00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 139 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3640 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.012 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.013 The recommended git tool is: git 00:00:00.013 using credential 00000000-0000-0000-0000-000000000002 00:00:00.015 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.028 Fetching changes from the remote Git repository 00:00:00.030 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.044 Using shallow fetch with depth 1 00:00:00.044 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.044 > git --version # timeout=10 00:00:00.071 > git --version # 'git version 2.39.2' 00:00:00.071 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.111 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.111 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.152 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.162 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.173 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:04.173 > git config core.sparsecheckout # timeout=10 00:00:04.183 > git read-tree -mu HEAD # timeout=10 00:00:04.198 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 
00:00:04.213 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:04.214 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:04.355 [Pipeline] Start of Pipeline 00:00:04.369 [Pipeline] library 00:00:04.371 Loading library shm_lib@master 00:00:04.371 Library shm_lib@master is cached. Copying from home. 00:00:04.386 [Pipeline] node 00:00:04.409 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:04.410 [Pipeline] { 00:00:04.420 [Pipeline] catchError 00:00:04.422 [Pipeline] { 00:00:04.434 [Pipeline] wrap 00:00:04.443 [Pipeline] { 00:00:04.451 [Pipeline] stage 00:00:04.453 [Pipeline] { (Prologue) 00:00:04.474 [Pipeline] echo 00:00:04.475 Node: VM-host-WFP7 00:00:04.482 [Pipeline] cleanWs 00:00:04.493 [WS-CLEANUP] Deleting project workspace... 00:00:04.493 [WS-CLEANUP] Deferred wipeout is used... 00:00:04.500 [WS-CLEANUP] done 00:00:04.700 [Pipeline] setCustomBuildProperty 00:00:04.794 [Pipeline] httpRequest 00:00:05.520 [Pipeline] echo 00:00:05.522 Sorcerer 10.211.164.101 is alive 00:00:05.531 [Pipeline] retry 00:00:05.534 [Pipeline] { 00:00:05.549 [Pipeline] httpRequest 00:00:05.553 HttpMethod: GET 00:00:05.553 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.554 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.555 Response Code: HTTP/1.1 200 OK 00:00:05.555 Success: Status code 200 is in the accepted range: 200,404 00:00:05.556 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.827 [Pipeline] } 00:00:05.871 [Pipeline] // retry 00:00:05.888 [Pipeline] sh 00:00:06.168 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.182 [Pipeline] httpRequest 00:00:09.205 [Pipeline] echo 00:00:09.207 Sorcerer 10.211.164.101 is dead 00:00:09.216 [Pipeline] httpRequest 
00:00:09.725 [Pipeline] echo 00:00:09.727 Sorcerer 10.211.164.101 is alive 00:00:09.735 [Pipeline] retry 00:00:09.737 [Pipeline] { 00:00:09.751 [Pipeline] httpRequest 00:00:09.755 HttpMethod: GET 00:00:09.756 URL: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:09.756 Sending request to url: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:09.764 Response Code: HTTP/1.1 200 OK 00:00:09.764 Success: Status code 200 is in the accepted range: 200,404 00:00:09.765 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:26.456 [Pipeline] } 00:00:26.474 [Pipeline] // retry 00:00:26.483 [Pipeline] sh 00:00:26.780 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:29.416 [Pipeline] sh 00:00:29.704 + git -C spdk log --oneline -n5 00:00:29.704 b18e1bd62 version: v24.09.1-pre 00:00:29.704 19524ad45 version: v24.09 00:00:29.704 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:00:29.704 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:00:29.704 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:00:29.724 [Pipeline] withCredentials 00:00:29.737 > git --version # timeout=10 00:00:29.751 > git --version # 'git version 2.39.2' 00:00:29.771 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:29.773 [Pipeline] { 00:00:29.782 [Pipeline] retry 00:00:29.784 [Pipeline] { 00:00:29.800 [Pipeline] sh 00:00:30.085 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:30.358 [Pipeline] } 00:00:30.376 [Pipeline] // retry 00:00:30.382 [Pipeline] } 00:00:30.399 [Pipeline] // withCredentials 00:00:30.409 [Pipeline] httpRequest 00:00:31.141 [Pipeline] echo 00:00:31.143 Sorcerer 10.211.164.101 is alive 00:00:31.153 [Pipeline] retry 00:00:31.155 [Pipeline] { 00:00:31.169 [Pipeline] httpRequest 00:00:31.175 HttpMethod: GET 00:00:31.175 URL: 
http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:31.176 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:31.192 Response Code: HTTP/1.1 200 OK 00:00:31.193 Success: Status code 200 is in the accepted range: 200,404 00:00:31.193 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:29.381 [Pipeline] } 00:01:29.399 [Pipeline] // retry 00:01:29.406 [Pipeline] sh 00:01:29.689 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:31.080 [Pipeline] sh 00:01:31.363 + git -C dpdk log --oneline -n5 00:01:31.363 eeb0605f11 version: 23.11.0 00:01:31.363 238778122a doc: update release notes for 23.11 00:01:31.363 46aa6b3cfc doc: fix description of RSS features 00:01:31.363 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:31.363 7e421ae345 devtools: support skipping forbid rule check 00:01:31.384 [Pipeline] writeFile 00:01:31.398 [Pipeline] sh 00:01:31.683 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:31.695 [Pipeline] sh 00:01:31.979 + cat autorun-spdk.conf 00:01:31.979 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.979 SPDK_RUN_ASAN=1 00:01:31.979 SPDK_RUN_UBSAN=1 00:01:31.979 SPDK_TEST_RAID=1 00:01:31.979 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:31.979 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:31.979 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.987 RUN_NIGHTLY=1 00:01:31.989 [Pipeline] } 00:01:32.001 [Pipeline] // stage 00:01:32.015 [Pipeline] stage 00:01:32.017 [Pipeline] { (Run VM) 00:01:32.030 [Pipeline] sh 00:01:32.351 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:32.351 + echo 'Start stage prepare_nvme.sh' 00:01:32.351 Start stage prepare_nvme.sh 00:01:32.351 + [[ -n 3 ]] 00:01:32.351 + disk_prefix=ex3 00:01:32.351 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:01:32.351 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:32.351 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:01:32.351 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.351 ++ SPDK_RUN_ASAN=1 00:01:32.351 ++ SPDK_RUN_UBSAN=1 00:01:32.351 ++ SPDK_TEST_RAID=1 00:01:32.351 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:32.351 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:32.351 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:32.351 ++ RUN_NIGHTLY=1 00:01:32.351 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:32.351 + nvme_files=() 00:01:32.351 + declare -A nvme_files 00:01:32.351 + backend_dir=/var/lib/libvirt/images/backends 00:01:32.351 + nvme_files['nvme.img']=5G 00:01:32.351 + nvme_files['nvme-cmb.img']=5G 00:01:32.351 + nvme_files['nvme-multi0.img']=4G 00:01:32.351 + nvme_files['nvme-multi1.img']=4G 00:01:32.351 + nvme_files['nvme-multi2.img']=4G 00:01:32.351 + nvme_files['nvme-openstack.img']=8G 00:01:32.351 + nvme_files['nvme-zns.img']=5G 00:01:32.351 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:32.351 + (( SPDK_TEST_FTL == 1 )) 00:01:32.351 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:32.351 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:32.351 + for nvme in "${!nvme_files[@]}" 00:01:32.351 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:32.351 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.351 + for nvme in "${!nvme_files[@]}" 00:01:32.351 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:32.351 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.351 + for nvme in "${!nvme_files[@]}" 00:01:32.351 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:32.351 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:32.351 + for nvme in "${!nvme_files[@]}" 00:01:32.351 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:32.351 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.351 + for nvme in "${!nvme_files[@]}" 00:01:32.351 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:32.351 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.351 + for nvme in "${!nvme_files[@]}" 00:01:32.352 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:32.352 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.352 + for nvme in "${!nvme_files[@]}" 00:01:32.352 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:32.352 
Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.612 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:32.612 + echo 'End stage prepare_nvme.sh' 00:01:32.612 End stage prepare_nvme.sh 00:01:32.625 [Pipeline] sh 00:01:32.909 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:32.909 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:01:32.909 00:01:32.909 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:32.909 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:32.909 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:32.909 HELP=0 00:01:32.909 DRY_RUN=0 00:01:32.909 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:01:32.909 NVME_DISKS_TYPE=nvme,nvme, 00:01:32.909 NVME_AUTO_CREATE=0 00:01:32.909 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:01:32.909 NVME_CMB=,, 00:01:32.909 NVME_PMR=,, 00:01:32.909 NVME_ZNS=,, 00:01:32.909 NVME_MS=,, 00:01:32.909 NVME_FDP=,, 00:01:32.909 SPDK_VAGRANT_DISTRO=fedora39 00:01:32.909 SPDK_VAGRANT_VMCPU=10 00:01:32.909 SPDK_VAGRANT_VMRAM=12288 00:01:32.909 SPDK_VAGRANT_PROVIDER=libvirt 00:01:32.909 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:32.909 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:32.909 SPDK_OPENSTACK_NETWORK=0 00:01:32.909 VAGRANT_PACKAGE_BOX=0 00:01:32.909 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:32.909 
FORCE_DISTRO=true
00:01:32.909 VAGRANT_BOX_VERSION=
00:01:32.909 EXTRA_VAGRANTFILES=
00:01:32.909 NIC_MODEL=virtio
00:01:32.909
00:01:32.909 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:01:32.909 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:01:34.822 Bringing machine 'default' up with 'libvirt' provider...
00:01:35.392 ==> default: Creating image (snapshot of base box volume).
00:01:35.651 ==> default: Creating domain with the following settings...
00:01:35.651 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731898898_2263fe9071c8adbfddbf
00:01:35.651 ==> default: -- Domain type: kvm
00:01:35.651 ==> default: -- Cpus: 10
00:01:35.651 ==> default: -- Feature: acpi
00:01:35.651 ==> default: -- Feature: apic
00:01:35.651 ==> default: -- Feature: pae
00:01:35.651 ==> default: -- Memory: 12288M
00:01:35.651 ==> default: -- Memory Backing: hugepages:
00:01:35.651 ==> default: -- Management MAC:
00:01:35.651 ==> default: -- Loader:
00:01:35.651 ==> default: -- Nvram:
00:01:35.651 ==> default: -- Base box: spdk/fedora39
00:01:35.651 ==> default: -- Storage pool: default
00:01:35.651 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731898898_2263fe9071c8adbfddbf.img (20G)
00:01:35.651 ==> default: -- Volume Cache: default
00:01:35.651 ==> default: -- Kernel:
00:01:35.651 ==> default: -- Initrd:
00:01:35.651 ==> default: -- Graphics Type: vnc
00:01:35.651 ==> default: -- Graphics Port: -1
00:01:35.652 ==> default: -- Graphics IP: 127.0.0.1
00:01:35.652 ==> default: -- Graphics Password: Not defined
00:01:35.652 ==> default: -- Video Type: cirrus
00:01:35.652 ==> default: -- Video VRAM: 9216
00:01:35.652 ==> default: -- Sound Type:
00:01:35.652 ==> default: -- Keymap: en-us
00:01:35.652 ==> default: -- TPM Path:
00:01:35.652 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:35.652 ==> default: -- Command line args:
00:01:35.652 ==> default: -> value=-device,
00:01:35.652 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:35.652 ==> default: -> value=-drive,
00:01:35.652 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0,
00:01:35.652 ==> default: -> value=-device,
00:01:35.652 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:35.652 ==> default: -> value=-device,
00:01:35.652 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:35.652 ==> default: -> value=-drive,
00:01:35.652 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:35.652 ==> default: -> value=-device,
00:01:35.652 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:35.652 ==> default: -> value=-drive,
00:01:35.652 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:35.652 ==> default: -> value=-device,
00:01:35.652 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:35.652 ==> default: -> value=-drive,
00:01:35.652 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:35.652 ==> default: -> value=-device,
00:01:35.652 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:35.652 ==> default: Creating shared folders metadata...
00:01:35.652 ==> default: Starting domain.
00:01:37.561 ==> default: Waiting for domain to get an IP address...
00:01:55.683 ==> default: Waiting for SSH to become available...
00:01:55.683 ==> default: Configuring and enabling network interfaces...
00:02:00.995 default: SSH address: 192.168.121.243:22 00:02:00.995 default: SSH username: vagrant 00:02:00.995 default: SSH auth method: private key 00:02:02.908 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:11.040 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:17.620 ==> default: Mounting SSHFS shared folder... 00:02:19.025 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:19.025 ==> default: Checking Mount.. 00:02:20.933 ==> default: Folder Successfully Mounted! 00:02:20.933 ==> default: Running provisioner: file... 00:02:21.874 default: ~/.gitconfig => .gitconfig 00:02:22.133 00:02:22.133 SUCCESS! 00:02:22.133 00:02:22.133 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:22.133 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:22.133 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
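The `-drive`/`-device` pairs logged during domain creation above follow a fixed pattern: one `nvme` controller device, then a `-drive`/`nvme-ns` pair per backing image, with `nsid` counting up. A minimal sketch of that wiring, assuming the raw backing images already exist at the logged paths (the `build_nvme_args` helper is illustrative, not the actual vagrant tooling):

```shell
#!/bin/bash
# Sketch: assemble the QEMU -drive/-device argument pairs that expose raw
# image files as NVMe namespaces, mirroring the "Command line args" above.
# Serials, addresses, and block sizes are taken from the log.
build_nvme_args() {
  local ctrl_id=$1 serial=$2 addr=$3; shift 3
  local args=(-device "nvme,id=${ctrl_id},serial=${serial},addr=${addr}")
  local nsid=1 img
  for img in "$@"; do
    args+=(-drive "format=raw,file=${img},if=none,id=${ctrl_id}-drive$((nsid - 1))")
    args+=(-device "nvme-ns,drive=${ctrl_id}-drive$((nsid - 1)),bus=${ctrl_id},nsid=${nsid},zoned=false,logical_block_size=4096,physical_block_size=4096")
    ((nsid++))
  done
  printf '%s\n' "${args[@]}"
}

# Controller nvme-1 with three namespaces, as in the logged command line:
build_nvme_args nvme-1 12341 0x11 \
  /var/lib/libvirt/images/backends/ex3-nvme-multi0.img \
  /var/lib/libvirt/images/backends/ex3-nvme-multi1.img \
  /var/lib/libvirt/images/backends/ex3-nvme-multi2.img
```

The array would then be spliced into the `qemu-system-x86_64` invocation; one controller per serial keeps namespace IDs local to that controller, which is why the log shows `nsid=1` restarting for `nvme-1`.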
00:02:22.133
00:02:22.143 [Pipeline] }
00:02:22.157 [Pipeline] // stage
00:02:22.166 [Pipeline] dir
00:02:22.167 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:02:22.168 [Pipeline] {
00:02:22.182 [Pipeline] catchError
00:02:22.184 [Pipeline] {
00:02:22.196 [Pipeline] sh
00:02:22.480 + vagrant ssh-config --host vagrant
00:02:22.480 + sed -ne /^Host/,$p
00:02:22.480 + tee ssh_conf
00:02:25.021 Host vagrant
00:02:25.021 HostName 192.168.121.243
00:02:25.021 User vagrant
00:02:25.021 Port 22
00:02:25.021 UserKnownHostsFile /dev/null
00:02:25.021 StrictHostKeyChecking no
00:02:25.021 PasswordAuthentication no
00:02:25.021 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:25.021 IdentitiesOnly yes
00:02:25.021 LogLevel FATAL
00:02:25.021 ForwardAgent yes
00:02:25.021 ForwardX11 yes
00:02:25.021
00:02:25.035 [Pipeline] withEnv
00:02:25.037 [Pipeline] {
00:02:25.050 [Pipeline] sh
00:02:25.335 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:25.335 source /etc/os-release
00:02:25.335 [[ -e /image.version ]] && img=$(< /image.version)
00:02:25.335 # Minimal, systemd-like check.
00:02:25.335 if [[ -e /.dockerenv ]]; then
00:02:25.335 # Clear garbage from the node's name:
00:02:25.335 # agt-er_autotest_547-896 -> autotest_547-896
00:02:25.335 # $HOSTNAME is the actual container id
00:02:25.335 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:25.335 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:25.335 # We can assume this is a mount from a host where container is running,
00:02:25.335 # so fetch its hostname to easily identify the target swarm worker.
00:02:25.335 container="$(< /etc/hostname) ($agent)"
00:02:25.335 else
00:02:25.335 # Fallback
00:02:25.335 container=$agent
00:02:25.335 fi
00:02:25.335 fi
00:02:25.335 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:25.335
00:02:25.608 [Pipeline] }
00:02:25.625 [Pipeline] // withEnv
00:02:25.635 [Pipeline] setCustomBuildProperty
00:02:25.651 [Pipeline] stage
00:02:25.653 [Pipeline] { (Tests)
00:02:25.673 [Pipeline] sh
00:02:25.958 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:26.233 [Pipeline] sh
00:02:26.519 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:26.794 [Pipeline] timeout
00:02:26.795 Timeout set to expire in 1 hr 30 min
00:02:26.796 [Pipeline] {
00:02:26.810 [Pipeline] sh
00:02:27.095 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:27.667 HEAD is now at b18e1bd62 version: v24.09.1-pre
00:02:27.681 [Pipeline] sh
00:02:27.966 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:28.243 [Pipeline] sh
00:02:28.528 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:28.803 [Pipeline] sh
00:02:29.085 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:29.345 ++ readlink -f spdk_repo
00:02:29.345 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:29.345 + [[ -n /home/vagrant/spdk_repo ]]
00:02:29.345 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:29.345 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:29.345 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:29.345 + [[ !
-d /home/vagrant/spdk_repo/output ]]
00:02:29.345 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:29.345 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:29.345 + cd /home/vagrant/spdk_repo
00:02:29.345 + source /etc/os-release
00:02:29.345 ++ NAME='Fedora Linux'
00:02:29.345 ++ VERSION='39 (Cloud Edition)'
00:02:29.345 ++ ID=fedora
00:02:29.345 ++ VERSION_ID=39
00:02:29.345 ++ VERSION_CODENAME=
00:02:29.345 ++ PLATFORM_ID=platform:f39
00:02:29.345 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:29.345 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:29.345 ++ LOGO=fedora-logo-icon
00:02:29.345 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:29.345 ++ HOME_URL=https://fedoraproject.org/
00:02:29.345 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:29.345 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:29.345 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:29.345 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:29.345 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:29.345 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:29.345 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:29.345 ++ SUPPORT_END=2024-11-12
00:02:29.345 ++ VARIANT='Cloud Edition'
00:02:29.345 ++ VARIANT_ID=cloud
00:02:29.345 + uname -a
00:02:29.345 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:29.345 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:29.912 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:29.912 Hugepages
00:02:29.912 node hugesize free / total
00:02:29.912 node0 1048576kB 0 / 0
00:02:29.912 node0 2048kB 0 / 0
00:02:29.912
00:02:29.912 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:29.912 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:29.912 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:29.912 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:29.912 + rm -f /tmp/spdk-ld-path
00:02:29.912 + source autorun-spdk.conf
00:02:29.912 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:29.912 ++ SPDK_RUN_ASAN=1
00:02:29.912 ++ SPDK_RUN_UBSAN=1
00:02:29.912 ++ SPDK_TEST_RAID=1
00:02:29.912 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:29.912 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:29.912 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:29.912 ++ RUN_NIGHTLY=1
00:02:29.912 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:29.912 + [[ -n '' ]]
00:02:29.912 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:30.171 + for M in /var/spdk/build-*-manifest.txt
00:02:30.171 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:30.171 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:30.171 + for M in /var/spdk/build-*-manifest.txt
00:02:30.171 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:30.171 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:30.171 + for M in /var/spdk/build-*-manifest.txt
00:02:30.171 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:30.171 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:30.171 ++ uname
00:02:30.171 + [[ Linux == \L\i\n\u\x ]]
00:02:30.171 + sudo dmesg -T
00:02:30.171 + sudo dmesg --clear
00:02:30.171 + dmesg_pid=6167
00:02:30.171 + [[ Fedora Linux == FreeBSD ]]
00:02:30.171 + sudo dmesg -Tw
00:02:30.171 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:30.171 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:30.171 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:30.171 + [[ -x /usr/src/fio-static/fio ]]
00:02:30.171 + export FIO_BIN=/usr/src/fio-static/fio
00:02:30.171 + FIO_BIN=/usr/src/fio-static/fio
00:02:30.171 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:30.171 + [[ !
-v VFIO_QEMU_BIN ]] 00:02:30.171 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:30.171 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:30.171 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:30.171 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:30.171 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:30.171 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:30.171 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:30.171 Test configuration: 00:02:30.171 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:30.171 SPDK_RUN_ASAN=1 00:02:30.171 SPDK_RUN_UBSAN=1 00:02:30.171 SPDK_TEST_RAID=1 00:02:30.171 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:30.171 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:30.171 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:30.430 RUN_NIGHTLY=1 03:02:33 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:30.430 03:02:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:30.430 03:02:33 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:30.430 03:02:33 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:30.430 03:02:33 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:30.430 03:02:33 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:30.430 03:02:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.430 03:02:33 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.430 03:02:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.430 03:02:33 -- paths/export.sh@5 -- $ export PATH 00:02:30.430 03:02:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.430 03:02:33 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:30.430 03:02:33 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:30.430 03:02:33 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731898953.XXXXXX 00:02:30.430 03:02:33 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731898953.sN6Mc9 00:02:30.430 03:02:33 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:30.430 03:02:33 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:02:30.430 03:02:33 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:30.430 03:02:33 -- common/autobuild_common.sh@486 -- $ 
scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:30.430 03:02:33 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:30.430 03:02:33 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:30.430 03:02:33 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:30.430 03:02:33 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:30.430 03:02:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.430 03:02:33 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:30.430 03:02:33 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:30.430 03:02:33 -- pm/common@17 -- $ local monitor 00:02:30.430 03:02:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.430 03:02:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.430 03:02:33 -- pm/common@25 -- $ sleep 1 00:02:30.430 03:02:33 -- pm/common@21 -- $ date +%s 00:02:30.430 03:02:33 -- pm/common@21 -- $ date +%s 00:02:30.430 03:02:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731898953 00:02:30.430 03:02:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731898953 00:02:30.430 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731898953_collect-vmstat.pm.log 00:02:30.430 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731898953_collect-cpu-load.pm.log 00:02:31.366 03:02:34 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:31.366 03:02:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:31.366 03:02:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:31.366 03:02:34 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:31.366 03:02:34 -- spdk/autobuild.sh@16 -- $ date -u 00:02:31.366 Mon Nov 18 03:02:34 AM UTC 2024 00:02:31.366 03:02:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:31.366 v24.09-rc1-9-gb18e1bd62 00:02:31.366 03:02:34 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:31.366 03:02:34 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:31.366 03:02:34 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:31.366 03:02:34 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:31.366 03:02:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.366 ************************************ 00:02:31.366 START TEST asan 00:02:31.366 ************************************ 00:02:31.366 using asan 00:02:31.366 03:02:34 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:31.366 00:02:31.366 real 0m0.000s 00:02:31.366 user 0m0.000s 00:02:31.366 sys 0m0.000s 00:02:31.366 03:02:34 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:31.366 03:02:34 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:31.366 ************************************ 00:02:31.366 END TEST asan 00:02:31.366 ************************************ 00:02:31.366 03:02:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:31.366 03:02:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:31.366 03:02:34 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:31.366 03:02:34 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:31.366 03:02:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.366 
************************************ 00:02:31.366 START TEST ubsan 00:02:31.366 ************************************ 00:02:31.366 using ubsan 00:02:31.366 03:02:34 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:31.366 00:02:31.366 real 0m0.000s 00:02:31.366 user 0m0.000s 00:02:31.366 sys 0m0.000s 00:02:31.366 03:02:34 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:31.366 03:02:34 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:31.366 ************************************ 00:02:31.366 END TEST ubsan 00:02:31.366 ************************************ 00:02:31.629 03:02:34 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:31.629 03:02:34 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:31.629 03:02:34 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:31.629 03:02:34 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:31.629 03:02:34 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:31.629 03:02:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.629 ************************************ 00:02:31.629 START TEST build_native_dpdk 00:02:31.629 ************************************ 00:02:31.629 03:02:34 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 
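The asan/ubsan stanzas above follow a common pattern: `run_test NAME CMD...` prints START/END banners around the command and times it. A minimal bash sketch of that pattern — a hypothetical simplification, not the actual `autotest_common.sh` implementation (which also tracks timing and xtrace state):

```shell
#!/usr/bin/env bash
# Hypothetical simplification of the run_test wrapper traced in the log:
# print START/END banners around a named command and propagate its status.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    "$@"                 # run the wrapped command with its arguments
    local rc=$?          # capture its exit status before the banners
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
```

Invoked as in the log, e.g. `run_test asan echo 'using asan'`, the banners bracket the command's own output.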
00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:31.630 03:02:34 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:31.630 eeb0605f11 version: 23.11.0 00:02:31.630 238778122a doc: update release notes for 23.11 00:02:31.630 46aa6b3cfc doc: fix description of RSS features 00:02:31.630 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:31.630 7e421ae345 devtools: support skipping forbid rule check 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:31.630 03:02:35 build_native_dpdk -- 
common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:31.630 patching file config/rte_config.h 00:02:31.630 Hunk #1 succeeded at 60 (offset 1 line). 
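The `cmp_versions` trace above splits each version string on `.`, `-`, and `:` into numeric fields and compares them field by field. A minimal bash sketch of that logic — a hypothetical `ver_lt` helper for illustration, not the actual `scripts/common.sh` code (which handles more operators and mixed-length versions):

```shell
#!/usr/bin/env bash
# Hypothetical ver_lt: succeed (return 0) iff version $1 < version $2.
# Mirrors the traced approach: IFS=.-: splits fields, compare numerically.
ver_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i a b len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
        (( a > b )) && return 1       # first differing field decides
        (( a < b )) && return 0
    done
    return 1                          # equal versions are not "less than"
}
```

With the values from the log, `ver_lt 23.11.0 21.11.0` fails (23 > 21, so no backport patching path) while `ver_lt 23.11.0 24.07.0` succeeds, matching the `return 1` / `return 0` results traced above.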
00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:31.630 patching file lib/pcapng/rte_pcapng.c 00:02:31.630 03:02:35 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:31.630 03:02:35 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:31.630 03:02:35 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:31.631 03:02:35 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:31.631 03:02:35 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:31.631 03:02:35 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:31.631 03:02:35 build_native_dpdk -- common/autobuild_common.sh@184 -- 
$ '[' Linux = FreeBSD ']' 00:02:31.631 03:02:35 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:31.631 03:02:35 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:36.938 The Meson build system 00:02:36.938 Version: 1.5.0 00:02:36.938 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:36.938 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:36.938 Build type: native build 00:02:36.938 Program cat found: YES (/usr/bin/cat) 00:02:36.938 Project name: DPDK 00:02:36.938 Project version: 23.11.0 00:02:36.938 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:36.938 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:36.938 Host machine cpu family: x86_64 00:02:36.938 Host machine cpu: x86_64 00:02:36.938 Message: ## Building in Developer Mode ## 00:02:36.938 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:36.938 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:36.938 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:36.938 Program python3 found: YES (/usr/bin/python3) 00:02:36.938 Program cat found: YES (/usr/bin/cat) 00:02:36.938 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:36.938 Compiler for C supports arguments -march=native: YES 00:02:36.938 Checking for size of "void *" : 8 00:02:36.938 Checking for size of "void *" : 8 (cached) 00:02:36.938 Library m found: YES 00:02:36.938 Library numa found: YES 00:02:36.938 Has header "numaif.h" : YES 00:02:36.938 Library fdt found: NO 00:02:36.938 Library execinfo found: NO 00:02:36.938 Has header "execinfo.h" : YES 00:02:36.938 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:36.938 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:36.938 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:36.938 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:36.938 Run-time dependency openssl found: YES 3.1.1 00:02:36.938 Run-time dependency libpcap found: YES 1.10.4 00:02:36.938 Has header "pcap.h" with dependency libpcap: YES 00:02:36.938 Compiler for C supports arguments -Wcast-qual: YES 00:02:36.938 Compiler for C supports arguments -Wdeprecated: YES 00:02:36.938 Compiler for C supports arguments -Wformat: YES 00:02:36.938 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:36.938 Compiler for C supports arguments -Wformat-security: NO 00:02:36.938 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:36.938 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:36.938 Compiler for C supports arguments -Wnested-externs: YES 00:02:36.938 Compiler for C supports arguments -Wold-style-definition: YES 00:02:36.938 Compiler for C supports arguments -Wpointer-arith: YES 00:02:36.938 Compiler for C supports arguments -Wsign-compare: YES 00:02:36.938 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:36.938 Compiler for C supports arguments -Wundef: YES 00:02:36.938 Compiler for C supports arguments -Wwrite-strings: YES 00:02:36.938 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:36.938 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:36.938 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:36.938 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:36.938 Program objdump found: YES (/usr/bin/objdump) 00:02:36.938 Compiler for C supports arguments -mavx512f: YES 00:02:36.938 Checking if "AVX512 checking" compiles: YES 00:02:36.938 Fetching value of define "__SSE4_2__" : 1 00:02:36.938 Fetching value of define "__AES__" : 1 00:02:36.938 Fetching value of define "__AVX__" : 1 00:02:36.938 Fetching value of define "__AVX2__" : 1 00:02:36.938 Fetching value of define "__AVX512BW__" : 1 00:02:36.938 Fetching value of define "__AVX512CD__" : 1 00:02:36.938 Fetching value of define "__AVX512DQ__" : 1 00:02:36.938 Fetching value of define "__AVX512F__" : 1 00:02:36.938 Fetching value of define "__AVX512VL__" : 1 00:02:36.938 Fetching value of define "__PCLMUL__" : 1 00:02:36.938 Fetching value of define "__RDRND__" : 1 00:02:36.938 Fetching value of define "__RDSEED__" : 1 00:02:36.938 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:36.938 Fetching value of define "__znver1__" : (undefined) 00:02:36.938 Fetching value of define "__znver2__" : (undefined) 00:02:36.938 Fetching value of define "__znver3__" : (undefined) 00:02:36.938 Fetching value of define "__znver4__" : (undefined) 00:02:36.938 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:36.938 Message: lib/log: Defining dependency "log" 00:02:36.938 Message: lib/kvargs: Defining dependency "kvargs" 00:02:36.938 Message: lib/telemetry: Defining dependency "telemetry" 00:02:36.938 Checking for function "getentropy" : NO 00:02:36.938 Message: lib/eal: Defining dependency "eal" 00:02:36.938 Message: lib/ring: Defining dependency "ring" 00:02:36.938 Message: lib/rcu: Defining dependency "rcu" 00:02:36.938 Message: lib/mempool: Defining dependency "mempool" 00:02:36.938 Message: lib/mbuf: Defining dependency "mbuf" 00:02:36.938 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:36.938 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:36.938 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:36.938 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:36.938 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:36.938 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:36.938 Compiler for C supports arguments -mpclmul: YES 00:02:36.938 Compiler for C supports arguments -maes: YES 00:02:36.938 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:36.938 Compiler for C supports arguments -mavx512bw: YES 00:02:36.938 Compiler for C supports arguments -mavx512dq: YES 00:02:36.938 Compiler for C supports arguments -mavx512vl: YES 00:02:36.938 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:36.938 Compiler for C supports arguments -mavx2: YES 00:02:36.938 Compiler for C supports arguments -mavx: YES 00:02:36.938 Message: lib/net: Defining dependency "net" 00:02:36.938 Message: lib/meter: Defining dependency "meter" 00:02:36.938 Message: lib/ethdev: Defining dependency "ethdev" 00:02:36.938 Message: lib/pci: Defining dependency "pci" 00:02:36.938 Message: lib/cmdline: Defining dependency "cmdline" 00:02:36.938 Message: lib/metrics: Defining dependency "metrics" 00:02:36.938 Message: lib/hash: Defining dependency "hash" 00:02:36.938 Message: lib/timer: Defining dependency "timer" 00:02:36.938 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:36.938 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:36.938 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:36.938 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:36.938 Message: lib/acl: Defining dependency "acl" 00:02:36.938 Message: lib/bbdev: Defining dependency "bbdev" 00:02:36.938 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:36.938 Run-time dependency libelf found: YES 0.191 00:02:36.938 Message: lib/bpf: Defining dependency "bpf" 00:02:36.938 Message: lib/cfgfile: Defining dependency 
"cfgfile" 00:02:36.938 Message: lib/compressdev: Defining dependency "compressdev" 00:02:36.938 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:36.938 Message: lib/distributor: Defining dependency "distributor" 00:02:36.938 Message: lib/dmadev: Defining dependency "dmadev" 00:02:36.938 Message: lib/efd: Defining dependency "efd" 00:02:36.938 Message: lib/eventdev: Defining dependency "eventdev" 00:02:36.938 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:36.938 Message: lib/gpudev: Defining dependency "gpudev" 00:02:36.938 Message: lib/gro: Defining dependency "gro" 00:02:36.938 Message: lib/gso: Defining dependency "gso" 00:02:36.939 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:36.939 Message: lib/jobstats: Defining dependency "jobstats" 00:02:36.939 Message: lib/latencystats: Defining dependency "latencystats" 00:02:36.939 Message: lib/lpm: Defining dependency "lpm" 00:02:36.939 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:36.939 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:36.939 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:36.939 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:36.939 Message: lib/member: Defining dependency "member" 00:02:36.939 Message: lib/pcapng: Defining dependency "pcapng" 00:02:36.939 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:36.939 Message: lib/power: Defining dependency "power" 00:02:36.939 Message: lib/rawdev: Defining dependency "rawdev" 00:02:36.939 Message: lib/regexdev: Defining dependency "regexdev" 00:02:36.939 Message: lib/mldev: Defining dependency "mldev" 00:02:36.939 Message: lib/rib: Defining dependency "rib" 00:02:36.939 Message: lib/reorder: Defining dependency "reorder" 00:02:36.939 Message: lib/sched: Defining dependency "sched" 00:02:36.939 Message: lib/security: Defining dependency "security" 00:02:36.939 Message: lib/stack: Defining dependency "stack" 00:02:36.939 Has header 
"linux/userfaultfd.h" : YES 00:02:36.939 Has header "linux/vduse.h" : YES 00:02:36.939 Message: lib/vhost: Defining dependency "vhost" 00:02:36.939 Message: lib/ipsec: Defining dependency "ipsec" 00:02:36.939 Message: lib/pdcp: Defining dependency "pdcp" 00:02:36.939 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:36.939 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:36.939 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:36.939 Message: lib/fib: Defining dependency "fib" 00:02:36.939 Message: lib/port: Defining dependency "port" 00:02:36.939 Message: lib/pdump: Defining dependency "pdump" 00:02:36.939 Message: lib/table: Defining dependency "table" 00:02:36.939 Message: lib/pipeline: Defining dependency "pipeline" 00:02:36.939 Message: lib/graph: Defining dependency "graph" 00:02:36.939 Message: lib/node: Defining dependency "node" 00:02:36.939 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:36.939 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:36.939 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:38.845 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:38.846 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:38.846 Compiler for C supports arguments -Wno-unused-value: YES 00:02:38.846 Compiler for C supports arguments -Wno-format: YES 00:02:38.846 Compiler for C supports arguments -Wno-format-security: YES 00:02:38.846 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:38.846 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:38.846 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:38.846 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:38.846 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:38.846 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:38.846 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:38.846 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:02:38.846 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:38.846 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:38.846 Has header "sys/epoll.h" : YES 00:02:38.846 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:38.846 Configuring doxy-api-html.conf using configuration 00:02:38.846 Configuring doxy-api-man.conf using configuration 00:02:38.846 Program mandb found: YES (/usr/bin/mandb) 00:02:38.846 Program sphinx-build found: NO 00:02:38.846 Configuring rte_build_config.h using configuration 00:02:38.846 Message: 00:02:38.846 ================= 00:02:38.846 Applications Enabled 00:02:38.846 ================= 00:02:38.846 00:02:38.846 apps: 00:02:38.846 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:38.846 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:38.846 test-pmd, test-regex, test-sad, test-security-perf, 00:02:38.846 00:02:38.846 Message: 00:02:38.846 ================= 00:02:38.846 Libraries Enabled 00:02:38.846 ================= 00:02:38.846 00:02:38.846 libs: 00:02:38.846 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:38.846 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:38.846 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:38.846 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:38.846 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:38.846 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:38.846 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:38.846 00:02:38.846 00:02:38.846 Message: 00:02:38.846 =============== 00:02:38.846 Drivers Enabled 00:02:38.846 =============== 00:02:38.846 00:02:38.846 common: 00:02:38.846 00:02:38.846 bus: 00:02:38.846 pci, vdev, 00:02:38.846 mempool: 00:02:38.846 ring, 00:02:38.846 dma: 
00:02:38.846 00:02:38.846 net: 00:02:38.846 i40e, 00:02:38.846 raw: 00:02:38.846 00:02:38.846 crypto: 00:02:38.846 00:02:38.846 compress: 00:02:38.846 00:02:38.846 regex: 00:02:38.846 00:02:38.846 ml: 00:02:38.846 00:02:38.846 vdpa: 00:02:38.846 00:02:38.846 event: 00:02:38.846 00:02:38.846 baseband: 00:02:38.846 00:02:38.846 gpu: 00:02:38.846 00:02:38.846 00:02:38.846 Message: 00:02:38.846 ================= 00:02:38.846 Content Skipped 00:02:38.846 ================= 00:02:38.846 00:02:38.846 apps: 00:02:38.846 00:02:38.846 libs: 00:02:38.846 00:02:38.846 drivers: 00:02:38.846 common/cpt: not in enabled drivers build config 00:02:38.846 common/dpaax: not in enabled drivers build config 00:02:38.846 common/iavf: not in enabled drivers build config 00:02:38.846 common/idpf: not in enabled drivers build config 00:02:38.846 common/mvep: not in enabled drivers build config 00:02:38.846 common/octeontx: not in enabled drivers build config 00:02:38.846 bus/auxiliary: not in enabled drivers build config 00:02:38.846 bus/cdx: not in enabled drivers build config 00:02:38.846 bus/dpaa: not in enabled drivers build config 00:02:38.846 bus/fslmc: not in enabled drivers build config 00:02:38.846 bus/ifpga: not in enabled drivers build config 00:02:38.846 bus/platform: not in enabled drivers build config 00:02:38.846 bus/vmbus: not in enabled drivers build config 00:02:38.846 common/cnxk: not in enabled drivers build config 00:02:38.846 common/mlx5: not in enabled drivers build config 00:02:38.846 common/nfp: not in enabled drivers build config 00:02:38.846 common/qat: not in enabled drivers build config 00:02:38.846 common/sfc_efx: not in enabled drivers build config 00:02:38.846 mempool/bucket: not in enabled drivers build config 00:02:38.846 mempool/cnxk: not in enabled drivers build config 00:02:38.846 mempool/dpaa: not in enabled drivers build config 00:02:38.846 mempool/dpaa2: not in enabled drivers build config 00:02:38.846 mempool/octeontx: not in enabled drivers build 
config 00:02:38.846 mempool/stack: not in enabled drivers build config 00:02:38.846 dma/cnxk: not in enabled drivers build config 00:02:38.846 dma/dpaa: not in enabled drivers build config 00:02:38.846 dma/dpaa2: not in enabled drivers build config 00:02:38.846 dma/hisilicon: not in enabled drivers build config 00:02:38.846 dma/idxd: not in enabled drivers build config 00:02:38.846 dma/ioat: not in enabled drivers build config 00:02:38.846 dma/skeleton: not in enabled drivers build config 00:02:38.846 net/af_packet: not in enabled drivers build config 00:02:38.846 net/af_xdp: not in enabled drivers build config 00:02:38.846 net/ark: not in enabled drivers build config 00:02:38.846 net/atlantic: not in enabled drivers build config 00:02:38.846 net/avp: not in enabled drivers build config 00:02:38.846 net/axgbe: not in enabled drivers build config 00:02:38.846 net/bnx2x: not in enabled drivers build config 00:02:38.846 net/bnxt: not in enabled drivers build config 00:02:38.846 net/bonding: not in enabled drivers build config 00:02:38.846 net/cnxk: not in enabled drivers build config 00:02:38.846 net/cpfl: not in enabled drivers build config 00:02:38.846 net/cxgbe: not in enabled drivers build config 00:02:38.846 net/dpaa: not in enabled drivers build config 00:02:38.846 net/dpaa2: not in enabled drivers build config 00:02:38.846 net/e1000: not in enabled drivers build config 00:02:38.846 net/ena: not in enabled drivers build config 00:02:38.846 net/enetc: not in enabled drivers build config 00:02:38.846 net/enetfec: not in enabled drivers build config 00:02:38.846 net/enic: not in enabled drivers build config 00:02:38.846 net/failsafe: not in enabled drivers build config 00:02:38.846 net/fm10k: not in enabled drivers build config 00:02:38.846 net/gve: not in enabled drivers build config 00:02:38.846 net/hinic: not in enabled drivers build config 00:02:38.846 net/hns3: not in enabled drivers build config 00:02:38.846 net/iavf: not in enabled drivers build config 
00:02:38.846 net/ice: not in enabled drivers build config 00:02:38.846 net/idpf: not in enabled drivers build config 00:02:38.846 net/igc: not in enabled drivers build config 00:02:38.846 net/ionic: not in enabled drivers build config 00:02:38.846 net/ipn3ke: not in enabled drivers build config 00:02:38.846 net/ixgbe: not in enabled drivers build config 00:02:38.846 net/mana: not in enabled drivers build config 00:02:38.846 net/memif: not in enabled drivers build config 00:02:38.846 net/mlx4: not in enabled drivers build config 00:02:38.846 net/mlx5: not in enabled drivers build config 00:02:38.846 net/mvneta: not in enabled drivers build config 00:02:38.846 net/mvpp2: not in enabled drivers build config 00:02:38.846 net/netvsc: not in enabled drivers build config 00:02:38.846 net/nfb: not in enabled drivers build config 00:02:38.846 net/nfp: not in enabled drivers build config 00:02:38.846 net/ngbe: not in enabled drivers build config 00:02:38.846 net/null: not in enabled drivers build config 00:02:38.846 net/octeontx: not in enabled drivers build config 00:02:38.846 net/octeon_ep: not in enabled drivers build config 00:02:38.846 net/pcap: not in enabled drivers build config 00:02:38.846 net/pfe: not in enabled drivers build config 00:02:38.846 net/qede: not in enabled drivers build config 00:02:38.846 net/ring: not in enabled drivers build config 00:02:38.846 net/sfc: not in enabled drivers build config 00:02:38.846 net/softnic: not in enabled drivers build config 00:02:38.846 net/tap: not in enabled drivers build config 00:02:38.846 net/thunderx: not in enabled drivers build config 00:02:38.846 net/txgbe: not in enabled drivers build config 00:02:38.846 net/vdev_netvsc: not in enabled drivers build config 00:02:38.846 net/vhost: not in enabled drivers build config 00:02:38.846 net/virtio: not in enabled drivers build config 00:02:38.846 net/vmxnet3: not in enabled drivers build config 00:02:38.846 raw/cnxk_bphy: not in enabled drivers build config 00:02:38.846 
raw/cnxk_gpio: not in enabled drivers build config 00:02:38.846 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:38.846 raw/ifpga: not in enabled drivers build config 00:02:38.846 raw/ntb: not in enabled drivers build config 00:02:38.846 raw/skeleton: not in enabled drivers build config 00:02:38.846 crypto/armv8: not in enabled drivers build config 00:02:38.846 crypto/bcmfs: not in enabled drivers build config 00:02:38.846 crypto/caam_jr: not in enabled drivers build config 00:02:38.846 crypto/ccp: not in enabled drivers build config 00:02:38.846 crypto/cnxk: not in enabled drivers build config 00:02:38.846 crypto/dpaa_sec: not in enabled drivers build config 00:02:38.846 crypto/dpaa2_sec: not in enabled drivers build config 00:02:38.846 crypto/ipsec_mb: not in enabled drivers build config 00:02:38.846 crypto/mlx5: not in enabled drivers build config 00:02:38.846 crypto/mvsam: not in enabled drivers build config 00:02:38.846 crypto/nitrox: not in enabled drivers build config 00:02:38.846 crypto/null: not in enabled drivers build config 00:02:38.846 crypto/octeontx: not in enabled drivers build config 00:02:38.846 crypto/openssl: not in enabled drivers build config 00:02:38.846 crypto/scheduler: not in enabled drivers build config 00:02:38.846 crypto/uadk: not in enabled drivers build config 00:02:38.847 crypto/virtio: not in enabled drivers build config 00:02:38.847 compress/isal: not in enabled drivers build config 00:02:38.847 compress/mlx5: not in enabled drivers build config 00:02:38.847 compress/octeontx: not in enabled drivers build config 00:02:38.847 compress/zlib: not in enabled drivers build config 00:02:38.847 regex/mlx5: not in enabled drivers build config 00:02:38.847 regex/cn9k: not in enabled drivers build config 00:02:38.847 ml/cnxk: not in enabled drivers build config 00:02:38.847 vdpa/ifc: not in enabled drivers build config 00:02:38.847 vdpa/mlx5: not in enabled drivers build config 00:02:38.847 vdpa/nfp: not in enabled drivers build 
config 00:02:38.847 vdpa/sfc: not in enabled drivers build config 00:02:38.847 event/cnxk: not in enabled drivers build config 00:02:38.847 event/dlb2: not in enabled drivers build config 00:02:38.847 event/dpaa: not in enabled drivers build config 00:02:38.847 event/dpaa2: not in enabled drivers build config 00:02:38.847 event/dsw: not in enabled drivers build config 00:02:38.847 event/opdl: not in enabled drivers build config 00:02:38.847 event/skeleton: not in enabled drivers build config 00:02:38.847 event/sw: not in enabled drivers build config 00:02:38.847 event/octeontx: not in enabled drivers build config 00:02:38.847 baseband/acc: not in enabled drivers build config 00:02:38.847 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:38.847 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:38.847 baseband/la12xx: not in enabled drivers build config 00:02:38.847 baseband/null: not in enabled drivers build config 00:02:38.847 baseband/turbo_sw: not in enabled drivers build config 00:02:38.847 gpu/cuda: not in enabled drivers build config 00:02:38.847 00:02:38.847 00:02:38.847 Build targets in project: 217 00:02:38.847 00:02:38.847 DPDK 23.11.0 00:02:38.847 00:02:38.847 User defined options 00:02:38.847 libdir : lib 00:02:38.847 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:38.847 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:38.847 c_link_args : 00:02:38.847 enable_docs : false 00:02:38.847 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:38.847 enable_kmods : false 00:02:38.847 machine : native 00:02:38.847 tests : false 00:02:38.847 00:02:38.847 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:38.847 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:02:38.847 03:02:42 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:38.847 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:38.847 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:38.847 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:38.847 [3/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:38.847 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:39.107 [5/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:39.107 [6/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:39.107 [7/707] Linking static target lib/librte_kvargs.a 00:02:39.107 [8/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:39.107 [9/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:39.107 [10/707] Linking static target lib/librte_log.a 00:02:39.107 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.367 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:39.367 [13/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:39.367 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:39.367 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:39.367 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:39.367 [17/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:39.367 [18/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.367 [19/707] Linking target lib/librte_log.so.24.0 00:02:39.627 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:39.627 [21/707] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:39.627 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:39.627 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:39.627 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:39.627 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:39.888 [26/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:39.888 [27/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:39.888 [28/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:39.888 [29/707] Linking target lib/librte_kvargs.so.24.0 00:02:39.888 [30/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:39.888 [31/707] Linking static target lib/librte_telemetry.a 00:02:39.888 [32/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:39.888 [33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:39.888 [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:40.148 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:40.148 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:40.148 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:40.148 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:40.148 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:40.148 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:40.148 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:40.148 [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:40.148 [43/707] Linking target lib/librte_telemetry.so.24.0 00:02:40.148 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:40.408 [45/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:40.408 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:40.408 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:40.408 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:40.408 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:40.669 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:40.669 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:40.669 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:40.669 [53/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:40.669 [54/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:40.669 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:40.669 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:40.669 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:40.669 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:40.669 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:40.930 [60/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:40.930 [61/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:40.930 [62/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:40.930 [63/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:40.930 [64/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:40.930 [65/707] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:40.930 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:40.930 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:40.930 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:41.190 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:41.190 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:41.190 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:41.190 [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:41.190 [73/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:41.191 [74/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:41.191 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:41.191 [76/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:41.191 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:41.191 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:41.451 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:41.451 [80/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:41.451 [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:41.451 [82/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:41.719 [83/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:41.719 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:41.719 [85/707] Linking static target lib/librte_ring.a 00:02:41.719 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:41.719 [87/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:41.719 [88/707] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:41.719 [89/707] Linking static target lib/librte_eal.a 00:02:41.997 [90/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:41.997 [91/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.997 [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:41.997 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:41.997 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:41.997 [95/707] Linking static target lib/librte_mempool.a 00:02:42.257 [96/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:42.257 [97/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:42.257 [98/707] Linking static target lib/librte_rcu.a 00:02:42.257 [99/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:42.257 [100/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:42.257 [101/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:42.257 [102/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:42.257 [103/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:42.516 [104/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:42.516 [105/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.516 [106/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.516 [107/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:42.516 [108/707] Linking static target lib/librte_net.a 00:02:42.776 [109/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:42.776 [110/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:42.776 [111/707] Linking static target lib/librte_mbuf.a 00:02:42.776 [112/707] Linking static target 
lib/librte_meter.a 00:02:42.776 [113/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.776 [114/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:42.776 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:42.776 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:42.776 [117/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.776 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:43.036 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.297 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:43.297 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:43.559 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:43.559 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:43.559 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:43.559 [125/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:43.559 [126/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:43.559 [127/707] Linking static target lib/librte_pci.a 00:02:43.559 [128/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:43.818 [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:43.818 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:43.818 [131/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:43.818 [132/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:43.818 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.818 [134/707] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:43.818 [135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:43.818 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:43.818 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:43.818 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:43.818 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:44.078 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:44.078 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:44.078 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:44.078 [143/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:44.078 [144/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:44.078 [145/707] Linking static target lib/librte_cmdline.a 00:02:44.338 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:44.338 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:44.338 [148/707] Linking static target lib/librte_metrics.a 00:02:44.338 [149/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:44.338 [150/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:44.596 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.596 [152/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:44.596 [153/707] Linking static target lib/librte_timer.a 00:02:44.856 [154/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.856 [155/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:44.856 [156/707] Generating lib/timer.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:45.115 [157/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:45.115 [158/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:45.115 [159/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:45.374 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:45.633 [161/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:45.633 [162/707] Linking static target lib/librte_bitratestats.a 00:02:45.633 [163/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:45.633 [164/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.893 [165/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:45.893 [166/707] Linking static target lib/librte_bbdev.a 00:02:45.893 [167/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:46.152 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:46.152 [169/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:46.152 [170/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:46.152 [171/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:46.411 [172/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.411 [173/707] Linking static target lib/librte_hash.a 00:02:46.411 [174/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:46.411 [175/707] Linking static target lib/librte_ethdev.a 00:02:46.411 [176/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:46.670 [177/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:46.670 [178/707] Linking static target lib/acl/libavx2_tmp.a 00:02:46.670 [179/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:46.670 [180/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:46.670 [181/707] Linking target lib/librte_eal.so.24.0 00:02:46.670 [182/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.670 [183/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:46.670 [184/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:46.930 [185/707] Linking target lib/librte_meter.so.24.0 00:02:46.930 [186/707] Linking target lib/librte_ring.so.24.0 00:02:46.930 [187/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:46.930 [188/707] Linking target lib/librte_pci.so.24.0 00:02:46.930 [189/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:46.930 [190/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:46.930 [191/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:46.930 [192/707] Linking target lib/librte_timer.so.24.0 00:02:46.930 [193/707] Linking target lib/librte_rcu.so.24.0 00:02:46.930 [194/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:46.930 [195/707] Linking static target lib/librte_cfgfile.a 00:02:46.930 [196/707] Linking target lib/librte_mempool.so.24.0 00:02:46.930 [197/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:47.189 [198/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:47.189 [199/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:47.189 [200/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:47.189 [201/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:47.189 [202/707] Linking target lib/librte_mbuf.so.24.0 00:02:47.189 [203/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.189 [204/707] Generating symbol file 
lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:47.189 [205/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:47.189 [206/707] Linking target lib/librte_cfgfile.so.24.0 00:02:47.189 [207/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:47.189 [208/707] Linking target lib/librte_net.so.24.0 00:02:47.189 [209/707] Linking static target lib/librte_bpf.a 00:02:47.189 [210/707] Linking target lib/librte_bbdev.so.24.0 00:02:47.448 [211/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:47.448 [212/707] Linking target lib/librte_cmdline.so.24.0 00:02:47.448 [213/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:47.448 [214/707] Linking target lib/librte_hash.so.24.0 00:02:47.448 [215/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:47.448 [216/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.448 [217/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:47.449 [218/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:47.449 [219/707] Linking static target lib/librte_compressdev.a 00:02:47.449 [220/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:47.449 [221/707] Linking static target lib/librte_acl.a 00:02:47.708 [222/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:47.708 [223/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.708 [224/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:47.967 [225/707] Linking target lib/librte_acl.so.24.0 00:02:47.967 [226/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:47.967 [227/707] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:47.967 [228/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.967 [229/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:47.967 [230/707] Linking target lib/librte_compressdev.so.24.0 00:02:47.967 [231/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:47.967 [232/707] Linking static target lib/librte_distributor.a 00:02:48.227 [233/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:48.227 [234/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:48.227 [235/707] Linking static target lib/librte_dmadev.a 00:02:48.227 [236/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.227 [237/707] Linking target lib/librte_distributor.so.24.0 00:02:48.494 [238/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.494 [239/707] Linking target lib/librte_dmadev.so.24.0 00:02:48.494 [240/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:48.768 [241/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:48.768 [242/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:48.768 [243/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:48.768 [244/707] Linking static target lib/librte_efd.a 00:02:48.768 [245/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:49.028 [246/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.028 [247/707] Linking target lib/librte_efd.so.24.0 00:02:49.028 [248/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:49.288 [249/707] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:49.288 [250/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:49.288 [251/707] Linking static target lib/librte_cryptodev.a 00:02:49.288 [252/707] Linking static target lib/librte_dispatcher.a 00:02:49.288 [253/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:49.548 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:49.548 [255/707] Linking static target lib/librte_gpudev.a 00:02:49.548 [256/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:49.548 [257/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.548 [258/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:49.548 [259/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:49.808 [260/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:50.069 [261/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:50.069 [262/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:50.069 [263/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.069 [264/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:50.069 [265/707] Linking target lib/librte_gpudev.so.24.0 00:02:50.069 [266/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:50.069 [267/707] Linking static target lib/librte_gro.a 00:02:50.329 [268/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:50.329 [269/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.329 [270/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.329 [271/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:50.329 [272/707] Linking target 
lib/librte_cryptodev.so.24.0 00:02:50.329 [273/707] Linking target lib/librte_ethdev.so.24.0 00:02:50.329 [274/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.329 [275/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:50.329 [276/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:50.329 [277/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:50.329 [278/707] Linking target lib/librte_metrics.so.24.0 00:02:50.589 [279/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:50.589 [280/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:50.589 [281/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:50.589 [282/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:50.589 [283/707] Linking static target lib/librte_gso.a 00:02:50.589 [284/707] Linking target lib/librte_gro.so.24.0 00:02:50.589 [285/707] Linking target lib/librte_bpf.so.24.0 00:02:50.589 [286/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:50.589 [287/707] Linking target lib/librte_bitratestats.so.24.0 00:02:50.589 [288/707] Linking static target lib/librte_eventdev.a 00:02:50.589 [289/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:50.589 [290/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.849 [291/707] Linking target lib/librte_gso.so.24.0 00:02:50.849 [292/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:50.849 [293/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:50.849 [294/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:50.849 [295/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 
00:02:50.849 [296/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:50.849 [297/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:50.849 [298/707] Linking static target lib/librte_jobstats.a 00:02:51.109 [299/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:51.109 [300/707] Linking static target lib/librte_ip_frag.a 00:02:51.109 [301/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:51.109 [302/707] Linking static target lib/librte_latencystats.a 00:02:51.370 [303/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.370 [304/707] Linking target lib/librte_jobstats.so.24.0 00:02:51.370 [305/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:51.370 [306/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:51.370 [307/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:51.370 [308/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:51.370 [309/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.370 [310/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:51.370 [311/707] Linking target lib/librte_ip_frag.so.24.0 00:02:51.370 [312/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.370 [313/707] Linking target lib/librte_latencystats.so.24.0 00:02:51.630 [314/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:51.630 [315/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:51.630 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:51.630 [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:51.630 [318/707] Linking static target lib/librte_lpm.a 
00:02:51.890 [319/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:51.890 [320/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:51.890 [321/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:51.890 [322/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:51.890 [323/707] Linking static target lib/librte_pcapng.a 00:02:51.890 [324/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.150 [325/707] Linking target lib/librte_lpm.so.24.0 00:02:52.150 [326/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:52.150 [327/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:52.150 [328/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:52.150 [329/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:52.150 [330/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.150 [331/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:52.150 [332/707] Linking target lib/librte_pcapng.so.24.0 00:02:52.411 [333/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:52.411 [334/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:52.411 [335/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.411 [336/707] Linking target lib/librte_eventdev.so.24.0 00:02:52.411 [337/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:52.671 [338/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:52.671 [339/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:52.671 [340/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:52.671 
[341/707] Linking static target lib/librte_power.a 00:02:52.671 [342/707] Linking target lib/librte_dispatcher.so.24.0 00:02:52.671 [343/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:52.671 [344/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:52.671 [345/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:52.671 [346/707] Linking static target lib/librte_regexdev.a 00:02:52.671 [347/707] Linking static target lib/librte_rawdev.a 00:02:52.671 [348/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:52.671 [349/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:52.671 [350/707] Linking static target lib/librte_member.a 00:02:52.931 [351/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:52.931 [352/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:52.931 [353/707] Linking static target lib/librte_mldev.a 00:02:52.931 [354/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.192 [355/707] Linking target lib/librte_member.so.24.0 00:02:53.192 [356/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.192 [357/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:53.192 [358/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.192 [359/707] Linking target lib/librte_rawdev.so.24.0 00:02:53.192 [360/707] Linking target lib/librte_power.so.24.0 00:02:53.192 [361/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:53.192 [362/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:53.192 [363/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:53.192 [364/707] Linking static target lib/librte_reorder.a 00:02:53.192 [365/707] Generating lib/regexdev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:53.192 [366/707] Linking target lib/librte_regexdev.so.24.0 00:02:53.452 [367/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:53.452 [368/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:53.452 [369/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:53.452 [370/707] Linking static target lib/librte_rib.a 00:02:53.452 [371/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.452 [372/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:53.452 [373/707] Linking target lib/librte_reorder.so.24.0 00:02:53.452 [374/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:53.452 [375/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:53.452 [376/707] Linking static target lib/librte_stack.a 00:02:53.711 [377/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:53.711 [378/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:53.711 [379/707] Linking static target lib/librte_security.a 00:02:53.711 [380/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.711 [381/707] Linking target lib/librte_stack.so.24.0 00:02:53.971 [382/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.971 [383/707] Linking target lib/librte_rib.so.24.0 00:02:53.971 [384/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.971 [385/707] Linking target lib/librte_mldev.so.24.0 00:02:53.971 [386/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:53.971 [387/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:53.971 [388/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:53.971 [389/707] Generating 
lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.231 [390/707] Linking target lib/librte_security.so.24.0 00:02:54.231 [391/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:54.231 [392/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:54.231 [393/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:54.231 [394/707] Linking static target lib/librte_sched.a 00:02:54.491 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:54.491 [396/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:54.491 [397/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.491 [398/707] Linking target lib/librte_sched.so.24.0 00:02:54.751 [399/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:54.751 [400/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:54.751 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:54.751 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:55.012 [403/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:55.012 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:55.272 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:55.272 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:55.272 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:55.272 [408/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:55.272 [409/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:55.532 [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:55.532 [411/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:55.532 [412/707] Linking static target lib/librte_ipsec.a 00:02:55.532 [413/707] 
Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:55.532 [414/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:55.792 [415/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:55.792 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.792 [417/707] Linking target lib/librte_ipsec.so.24.0 00:02:55.792 [418/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:56.051 [419/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:56.051 [420/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:56.051 [421/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:56.311 [422/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:56.311 [423/707] Linking static target lib/librte_fib.a 00:02:56.311 [424/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:56.311 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:56.571 [426/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:56.571 [427/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:56.571 [428/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.571 [429/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:56.571 [430/707] Linking target lib/librte_fib.so.24.0 00:02:56.571 [431/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:56.571 [432/707] Linking static target lib/librte_pdcp.a 00:02:56.831 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.831 [434/707] Linking target lib/librte_pdcp.so.24.0 00:02:57.100 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:57.100 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:57.100 [437/707] Compiling C 
object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:57.100 [438/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:57.377 [439/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:57.377 [440/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:57.377 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:57.637 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:57.637 [443/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:57.637 [444/707] Linking static target lib/librte_port.a 00:02:57.638 [445/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:57.638 [446/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:57.638 [447/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:57.638 [448/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:57.898 [449/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:57.898 [450/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:57.898 [451/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:57.898 [452/707] Linking static target lib/librte_pdump.a 00:02:58.158 [453/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.158 [454/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:58.158 [455/707] Linking target lib/librte_port.so.24.0 00:02:58.158 [456/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.158 [457/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:58.158 [458/707] Linking target lib/librte_pdump.so.24.0 00:02:58.418 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:58.418 
[460/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:58.678 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:58.678 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:58.678 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:58.678 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:58.938 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:58.938 [466/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:58.938 [467/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:58.938 [468/707] Linking static target lib/librte_table.a 00:02:59.198 [469/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:59.198 [470/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:59.458 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:59.458 [472/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.458 [473/707] Linking target lib/librte_table.so.24.0 00:02:59.718 [474/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:59.718 [475/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:59.718 [476/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:59.718 [477/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:59.718 [478/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:59.978 [479/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:00.239 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:00.239 [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:00.239 [482/707] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:00.239 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:00.499 [484/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:00.499 [485/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:00.499 [486/707] Linking static target lib/librte_graph.a 00:03:00.499 [487/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:00.499 [488/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:00.759 [489/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:00.759 [490/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:01.019 [491/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.019 [492/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:01.019 [493/707] Linking target lib/librte_graph.so.24.0 00:03:01.019 [494/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:01.278 [495/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:01.278 [496/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:01.278 [497/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:01.278 [498/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:01.538 [499/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:01.538 [500/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:01.538 [501/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:01.538 [502/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:01.798 [503/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:01.798 [504/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:01.798 [505/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 
00:03:01.798 [506/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:02.058 [507/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:02.058 [508/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:02.058 [509/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:02.058 [510/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:02.058 [511/707] Linking static target lib/librte_node.a 00:03:02.058 [512/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:02.316 [513/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.316 [514/707] Linking target lib/librte_node.so.24.0 00:03:02.316 [515/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:02.316 [516/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:02.575 [517/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:02.575 [518/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:02.575 [519/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:02.576 [520/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.576 [521/707] Linking static target drivers/librte_bus_vdev.a 00:03:02.576 [522/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:02.576 [523/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.576 [524/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:02.576 [525/707] Linking static target drivers/librte_bus_pci.a 00:03:02.576 [526/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.836 [527/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 
00:03:02.836 [528/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:02.836 [529/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:02.836 [530/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.836 [531/707] Linking target drivers/librte_bus_vdev.so.24.0 00:03:02.836 [532/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:02.836 [533/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:02.836 [534/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:03.096 [535/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:03.096 [536/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.096 [537/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.096 [538/707] Linking static target drivers/librte_mempool_ring.a 00:03:03.096 [539/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.096 [540/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:03.096 [541/707] Linking target drivers/librte_bus_pci.so.24.0 00:03:03.096 [542/707] Linking target drivers/librte_mempool_ring.so.24.0 00:03:03.356 [543/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:03.356 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:03.615 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:03.876 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:03.876 [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:04.445 [548/707] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:04.445 [549/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:04.705 [550/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:04.705 [551/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:04.705 [552/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:04.705 [553/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:04.705 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:04.967 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:04.968 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:05.229 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:05.229 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:05.229 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:05.488 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:05.748 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:05.748 [562/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:05.748 [563/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:06.007 [564/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:06.007 [565/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:06.267 [566/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:06.267 [567/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:06.267 [568/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:06.527 [569/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:06.527 [570/707] Compiling C object 
app/dpdk-graph.p/graph_l3fwd.c.o 00:03:06.527 [571/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:06.527 [572/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:06.527 [573/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:06.787 [574/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:06.787 [575/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:07.046 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:07.046 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:07.046 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:07.046 [579/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:07.306 [580/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:07.306 [581/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:07.566 [582/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:07.566 [583/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:07.566 [584/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:07.566 [585/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:07.566 [586/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:07.566 [587/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:07.566 [588/707] Linking static target drivers/librte_net_i40e.a 00:03:07.566 [589/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:07.566 [590/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:08.135 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.135 [592/707] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:08.135 [593/707] Linking target drivers/librte_net_i40e.so.24.0 00:03:08.135 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:08.135 [595/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:08.395 [596/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:08.395 [597/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:08.655 [598/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:08.655 [599/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:08.915 [600/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:08.915 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:08.915 [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:08.915 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:08.915 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:08.915 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:09.176 [606/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:09.176 [607/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:09.176 [608/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:09.436 [609/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:09.436 [610/707] Linking static target lib/librte_vhost.a 00:03:09.436 [611/707] Compiling C object 
app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:09.436 [612/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:09.436 [613/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:09.436 [614/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:09.696 [615/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:09.956 [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:09.956 [617/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:09.956 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:10.216 [619/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.216 [620/707] Linking target lib/librte_vhost.so.24.0 00:03:10.476 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:10.476 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:10.736 [623/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:10.736 [624/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:10.736 [625/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:10.736 [626/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:10.736 [627/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:10.736 [628/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:10.996 [629/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:10.996 [630/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:10.996 [631/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:10.996 [632/707] 
Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:11.255 [633/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:11.255 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:11.255 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:11.255 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:11.255 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:11.514 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:11.514 [639/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:11.514 [640/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:11.514 [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:11.773 [642/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:11.773 [643/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:11.773 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:12.035 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:12.035 [646/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:12.035 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:12.035 [648/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:12.295 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:12.295 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:12.295 [651/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:12.555 [652/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:12.555 [653/707] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:12.555 [654/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:12.816 [655/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:12.816 [656/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:12.816 [657/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:12.816 [658/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:13.075 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:13.075 [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:13.335 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:13.335 [662/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:13.335 [663/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:13.595 [664/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:13.595 [665/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:13.856 [666/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:13.856 [667/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:14.115 [668/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:14.115 [669/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:14.115 [670/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:14.115 [671/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:14.374 [672/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:14.374 [673/707] Linking static target lib/librte_pipeline.a 00:03:14.374 [674/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:14.633 [675/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:14.633 [676/707] Compiling C object 
app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:14.892 [677/707] Linking target app/dpdk-dumpcap 00:03:15.151 [678/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:15.152 [679/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:15.152 [680/707] Linking target app/dpdk-pdump 00:03:15.152 [681/707] Linking target app/dpdk-graph 00:03:15.152 [682/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:15.152 [683/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:15.410 [684/707] Linking target app/dpdk-proc-info 00:03:15.411 [685/707] Linking target app/dpdk-test-acl 00:03:15.411 [686/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:15.411 [687/707] Linking target app/dpdk-test-bbdev 00:03:15.671 [688/707] Linking target app/dpdk-test-compress-perf 00:03:15.671 [689/707] Linking target app/dpdk-test-dma-perf 00:03:15.671 [690/707] Linking target app/dpdk-test-cmdline 00:03:15.671 [691/707] Linking target app/dpdk-test-crypto-perf 00:03:15.671 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:15.930 [693/707] Linking target app/dpdk-test-fib 00:03:15.930 [694/707] Linking target app/dpdk-test-flow-perf 00:03:15.930 [695/707] Linking target app/dpdk-test-gpudev 00:03:15.930 [696/707] Linking target app/dpdk-test-eventdev 00:03:15.930 [697/707] Linking target app/dpdk-test-pipeline 00:03:15.931 [698/707] Linking target app/dpdk-test-mldev 00:03:16.190 [699/707] Linking target app/dpdk-test-regex 00:03:16.190 [700/707] Linking target app/dpdk-testpmd 00:03:16.190 [701/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:16.450 [702/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:16.709 [703/707] Linking target app/dpdk-test-sad 00:03:16.709 [704/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:17.279 [705/707] Linking 
target app/dpdk-test-security-perf 00:03:19.190 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.190 [707/707] Linking target lib/librte_pipeline.so.24.0 00:03:19.190 03:03:22 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:19.190 03:03:22 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:19.190 03:03:22 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:19.190 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:19.190 [0/1] Installing files. 00:03:19.455 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:19.455 Installing 
/home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.455 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 
00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:19.456 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:19.457 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.457 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:19.457 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:03:19.458 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:19.459 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:19.459 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.459 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.719 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.720 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.983 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.983 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.983 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.983 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:19.983 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.983 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:19.983 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.983 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:19.983 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:19.983 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:19.983 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:19.983 Installing app/dpdk-graph to
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.983 Installing 
/home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.983 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing 
/home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing 
/home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.985 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing 
/home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 
Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 
Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing 
/home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:19.986 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:19.986 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:19.986 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:19.986 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:19.986 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:19.986 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:19.986 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:19.986 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:19.986 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:19.986 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:19.986 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:19.986 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:19.986 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:19.986 Installing symlink pointing to librte_mempool.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:19.986 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:19.986 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:19.986 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:19.986 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:19.986 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:19.986 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:19.987 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:19.987 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:19.987 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:19.987 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:19.987 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:19.987 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:19.987 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:19.987 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:19.987 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:19.987 Installing symlink pointing to librte_hash.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:19.987 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:19.987 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:19.987 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:19.987 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:19.987 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:19.987 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:19.987 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:19.987 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:19.987 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:19.987 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:19.987 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:19.987 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:19.987 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:19.987 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:19.987 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:19.987 Installing symlink pointing to 
librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:19.987 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:19.987 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:19.987 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:19.987 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:19.987 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:19.987 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:19.987 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:19.987 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:19.987 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:19.987 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:19.987 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:19.987 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:19.987 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:19.987 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:19.987 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 
00:03:19.987 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:19.987 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:19.987 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:19.987 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:19.987 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:19.987 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:19.987 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:19.987 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:19.987 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:19.987 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:19.987 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:19.987 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:19.987 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:19.987 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:19.987 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:19.987 Installing symlink pointing to librte_power.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:19.987 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:19.987 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:19.987 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:19.987 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:19.987 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:19.987 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:19.987 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:19.987 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:19.987 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:19.987 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:19.987 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:19.987 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:19.987 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:19.987 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:19.987 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:19.987 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:19.987 
'./librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:19.987 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:19.987 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:19.987 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:19.987 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:19.987 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:19.987 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:19.987 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:19.987 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:19.987 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:19.987 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:19.987 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:19.987 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:19.987 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:19.987 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:19.987 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:19.987 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:19.987 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:19.987 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:19.987 Installing symlink pointing to librte_fib.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:19.987 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:19.987 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:19.987 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:19.987 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:19.987 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:19.987 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:19.987 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:19.987 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:19.987 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:19.987 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:19.987 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:19.987 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:19.987 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:19.987 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:19.987 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:19.988 Installing 
symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:19.988 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:19.988 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:19.988 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:19.988 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:19.988 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:20.247 03:03:23 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:20.247 ************************************ 00:03:20.247 END TEST build_native_dpdk 00:03:20.247 ************************************ 00:03:20.247 03:03:23 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:20.247 00:03:20.247 real 0m48.616s 00:03:20.247 user 5m21.116s 00:03:20.247 sys 0m54.009s 00:03:20.247 03:03:23 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:20.247 03:03:23 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:20.247 03:03:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:20.247 03:03:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:20.247 03:03:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:20.247 03:03:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:20.247 03:03:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:20.247 03:03:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:20.247 03:03:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:20.247 03:03:23 -- spdk/autobuild.sh@67 -- $ 
/home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:20.247 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:20.507 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.507 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:20.507 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:20.766 Using 'verbs' RDMA provider 00:03:37.039 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:51.949 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:52.779 Creating mk/config.mk...done. 00:03:52.779 Creating mk/cc.flags.mk...done. 00:03:52.779 Type 'make' to build. 00:03:52.779 03:03:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:52.779 03:03:56 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:52.779 03:03:56 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:52.779 03:03:56 -- common/autotest_common.sh@10 -- $ set +x 00:03:52.779 ************************************ 00:03:52.779 START TEST make 00:03:52.779 ************************************ 00:03:52.779 03:03:56 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:53.347 make[1]: Nothing to be done for 'all'. 
00:04:40.055 CC lib/log/log.o 00:04:40.055 CC lib/log/log_flags.o 00:04:40.055 CC lib/log/log_deprecated.o 00:04:40.055 CC lib/ut/ut.o 00:04:40.055 CC lib/ut_mock/mock.o 00:04:40.055 LIB libspdk_log.a 00:04:40.055 SO libspdk_log.so.7.0 00:04:40.055 LIB libspdk_ut.a 00:04:40.055 LIB libspdk_ut_mock.a 00:04:40.055 SO libspdk_ut.so.2.0 00:04:40.055 SO libspdk_ut_mock.so.6.0 00:04:40.055 SYMLINK libspdk_log.so 00:04:40.055 SYMLINK libspdk_ut.so 00:04:40.055 SYMLINK libspdk_ut_mock.so 00:04:40.055 CC lib/util/base64.o 00:04:40.055 CC lib/util/bit_array.o 00:04:40.055 CC lib/util/crc32.o 00:04:40.055 CC lib/util/crc16.o 00:04:40.055 CC lib/util/crc32c.o 00:04:40.055 CC lib/dma/dma.o 00:04:40.055 CC lib/util/cpuset.o 00:04:40.055 CXX lib/trace_parser/trace.o 00:04:40.055 CC lib/ioat/ioat.o 00:04:40.055 CC lib/vfio_user/host/vfio_user_pci.o 00:04:40.055 CC lib/util/crc32_ieee.o 00:04:40.055 CC lib/util/crc64.o 00:04:40.055 CC lib/vfio_user/host/vfio_user.o 00:04:40.055 CC lib/util/dif.o 00:04:40.055 LIB libspdk_dma.a 00:04:40.055 CC lib/util/fd.o 00:04:40.055 CC lib/util/fd_group.o 00:04:40.055 SO libspdk_dma.so.5.0 00:04:40.055 CC lib/util/file.o 00:04:40.055 CC lib/util/hexlify.o 00:04:40.055 SYMLINK libspdk_dma.so 00:04:40.055 LIB libspdk_ioat.a 00:04:40.055 CC lib/util/iov.o 00:04:40.055 SO libspdk_ioat.so.7.0 00:04:40.055 CC lib/util/math.o 00:04:40.055 CC lib/util/net.o 00:04:40.055 SYMLINK libspdk_ioat.so 00:04:40.055 CC lib/util/pipe.o 00:04:40.055 CC lib/util/strerror_tls.o 00:04:40.055 LIB libspdk_vfio_user.a 00:04:40.055 CC lib/util/string.o 00:04:40.055 SO libspdk_vfio_user.so.5.0 00:04:40.055 CC lib/util/uuid.o 00:04:40.055 CC lib/util/xor.o 00:04:40.055 SYMLINK libspdk_vfio_user.so 00:04:40.055 CC lib/util/zipf.o 00:04:40.055 CC lib/util/md5.o 00:04:40.055 LIB libspdk_util.a 00:04:40.055 SO libspdk_util.so.10.0 00:04:40.055 LIB libspdk_trace_parser.a 00:04:40.055 SYMLINK libspdk_util.so 00:04:40.055 SO libspdk_trace_parser.so.6.0 00:04:40.055 SYMLINK 
libspdk_trace_parser.so 00:04:40.055 CC lib/json/json_parse.o 00:04:40.055 CC lib/vmd/vmd.o 00:04:40.055 CC lib/json/json_util.o 00:04:40.055 CC lib/json/json_write.o 00:04:40.055 CC lib/conf/conf.o 00:04:40.055 CC lib/vmd/led.o 00:04:40.055 CC lib/env_dpdk/env.o 00:04:40.055 CC lib/rdma_provider/common.o 00:04:40.055 CC lib/rdma_utils/rdma_utils.o 00:04:40.055 CC lib/idxd/idxd.o 00:04:40.055 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:40.055 CC lib/idxd/idxd_user.o 00:04:40.055 LIB libspdk_conf.a 00:04:40.055 CC lib/idxd/idxd_kernel.o 00:04:40.055 CC lib/env_dpdk/memory.o 00:04:40.055 SO libspdk_conf.so.6.0 00:04:40.055 LIB libspdk_rdma_utils.a 00:04:40.055 LIB libspdk_json.a 00:04:40.055 SO libspdk_rdma_utils.so.1.0 00:04:40.055 SO libspdk_json.so.6.0 00:04:40.055 SYMLINK libspdk_conf.so 00:04:40.055 CC lib/env_dpdk/pci.o 00:04:40.055 SYMLINK libspdk_rdma_utils.so 00:04:40.055 LIB libspdk_rdma_provider.a 00:04:40.055 CC lib/env_dpdk/init.o 00:04:40.055 SYMLINK libspdk_json.so 00:04:40.055 CC lib/env_dpdk/threads.o 00:04:40.055 SO libspdk_rdma_provider.so.6.0 00:04:40.055 SYMLINK libspdk_rdma_provider.so 00:04:40.055 CC lib/env_dpdk/pci_ioat.o 00:04:40.055 CC lib/env_dpdk/pci_virtio.o 00:04:40.055 CC lib/env_dpdk/pci_vmd.o 00:04:40.055 CC lib/jsonrpc/jsonrpc_server.o 00:04:40.055 CC lib/env_dpdk/pci_idxd.o 00:04:40.055 CC lib/env_dpdk/pci_event.o 00:04:40.055 CC lib/env_dpdk/sigbus_handler.o 00:04:40.055 CC lib/env_dpdk/pci_dpdk.o 00:04:40.055 LIB libspdk_vmd.a 00:04:40.055 LIB libspdk_idxd.a 00:04:40.055 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:40.055 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:40.055 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:40.055 SO libspdk_vmd.so.6.0 00:04:40.055 SO libspdk_idxd.so.12.1 00:04:40.055 CC lib/jsonrpc/jsonrpc_client.o 00:04:40.055 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:40.055 SYMLINK libspdk_vmd.so 00:04:40.055 SYMLINK libspdk_idxd.so 00:04:40.055 LIB libspdk_jsonrpc.a 00:04:40.055 SO libspdk_jsonrpc.so.6.0 00:04:40.055 SYMLINK 
libspdk_jsonrpc.so 00:04:40.313 CC lib/rpc/rpc.o 00:04:40.313 LIB libspdk_env_dpdk.a 00:04:40.572 SO libspdk_env_dpdk.so.15.0 00:04:40.572 LIB libspdk_rpc.a 00:04:40.572 SO libspdk_rpc.so.6.0 00:04:40.572 SYMLINK libspdk_env_dpdk.so 00:04:40.572 SYMLINK libspdk_rpc.so 00:04:41.139 CC lib/notify/notify.o 00:04:41.139 CC lib/notify/notify_rpc.o 00:04:41.139 CC lib/trace/trace_rpc.o 00:04:41.139 CC lib/trace/trace.o 00:04:41.139 CC lib/trace/trace_flags.o 00:04:41.139 CC lib/keyring/keyring.o 00:04:41.139 CC lib/keyring/keyring_rpc.o 00:04:41.139 LIB libspdk_notify.a 00:04:41.139 SO libspdk_notify.so.6.0 00:04:41.398 LIB libspdk_keyring.a 00:04:41.398 SYMLINK libspdk_notify.so 00:04:41.398 LIB libspdk_trace.a 00:04:41.398 SO libspdk_keyring.so.2.0 00:04:41.398 SO libspdk_trace.so.11.0 00:04:41.398 SYMLINK libspdk_keyring.so 00:04:41.398 SYMLINK libspdk_trace.so 00:04:41.969 CC lib/thread/thread.o 00:04:41.969 CC lib/thread/iobuf.o 00:04:41.969 CC lib/sock/sock.o 00:04:41.969 CC lib/sock/sock_rpc.o 00:04:42.229 LIB libspdk_sock.a 00:04:42.229 SO libspdk_sock.so.10.0 00:04:42.488 SYMLINK libspdk_sock.so 00:04:42.748 CC lib/nvme/nvme_fabric.o 00:04:42.748 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:42.748 CC lib/nvme/nvme_ctrlr.o 00:04:42.748 CC lib/nvme/nvme_pcie_common.o 00:04:42.748 CC lib/nvme/nvme_ns.o 00:04:42.748 CC lib/nvme/nvme_ns_cmd.o 00:04:42.748 CC lib/nvme/nvme_pcie.o 00:04:42.748 CC lib/nvme/nvme.o 00:04:42.748 CC lib/nvme/nvme_qpair.o 00:04:43.689 CC lib/nvme/nvme_quirks.o 00:04:43.689 CC lib/nvme/nvme_transport.o 00:04:43.689 LIB libspdk_thread.a 00:04:43.689 CC lib/nvme/nvme_discovery.o 00:04:43.689 SO libspdk_thread.so.10.1 00:04:43.689 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:43.689 SYMLINK libspdk_thread.so 00:04:43.689 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:43.689 CC lib/nvme/nvme_tcp.o 00:04:43.689 CC lib/nvme/nvme_opal.o 00:04:43.949 CC lib/nvme/nvme_io_msg.o 00:04:43.949 CC lib/nvme/nvme_poll_group.o 00:04:43.949 CC lib/nvme/nvme_zns.o 00:04:44.209 CC 
lib/nvme/nvme_stubs.o 00:04:44.209 CC lib/nvme/nvme_auth.o 00:04:44.209 CC lib/nvme/nvme_cuse.o 00:04:44.209 CC lib/nvme/nvme_rdma.o 00:04:44.469 CC lib/accel/accel.o 00:04:44.469 CC lib/accel/accel_rpc.o 00:04:44.469 CC lib/accel/accel_sw.o 00:04:44.729 CC lib/blob/blobstore.o 00:04:44.729 CC lib/init/json_config.o 00:04:44.729 CC lib/virtio/virtio.o 00:04:44.989 CC lib/init/subsystem.o 00:04:44.989 CC lib/fsdev/fsdev.o 00:04:44.989 CC lib/init/subsystem_rpc.o 00:04:44.989 CC lib/init/rpc.o 00:04:44.989 CC lib/virtio/virtio_vhost_user.o 00:04:44.989 CC lib/virtio/virtio_vfio_user.o 00:04:44.989 CC lib/virtio/virtio_pci.o 00:04:45.248 CC lib/blob/request.o 00:04:45.248 LIB libspdk_init.a 00:04:45.248 SO libspdk_init.so.6.0 00:04:45.248 CC lib/blob/zeroes.o 00:04:45.248 SYMLINK libspdk_init.so 00:04:45.248 CC lib/blob/blob_bs_dev.o 00:04:45.248 CC lib/fsdev/fsdev_io.o 00:04:45.505 LIB libspdk_virtio.a 00:04:45.505 SO libspdk_virtio.so.7.0 00:04:45.505 CC lib/fsdev/fsdev_rpc.o 00:04:45.505 SYMLINK libspdk_virtio.so 00:04:45.505 LIB libspdk_accel.a 00:04:45.505 CC lib/event/app.o 00:04:45.505 CC lib/event/reactor.o 00:04:45.505 CC lib/event/log_rpc.o 00:04:45.505 CC lib/event/app_rpc.o 00:04:45.772 SO libspdk_accel.so.16.0 00:04:45.773 CC lib/event/scheduler_static.o 00:04:45.773 LIB libspdk_nvme.a 00:04:45.773 SYMLINK libspdk_accel.so 00:04:45.773 LIB libspdk_fsdev.a 00:04:45.773 SO libspdk_fsdev.so.1.0 00:04:45.773 SO libspdk_nvme.so.14.0 00:04:45.773 SYMLINK libspdk_fsdev.so 00:04:46.035 CC lib/bdev/bdev_rpc.o 00:04:46.035 CC lib/bdev/bdev.o 00:04:46.035 CC lib/bdev/bdev_zone.o 00:04:46.035 CC lib/bdev/scsi_nvme.o 00:04:46.035 CC lib/bdev/part.o 00:04:46.035 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:46.035 SYMLINK libspdk_nvme.so 00:04:46.035 LIB libspdk_event.a 00:04:46.292 SO libspdk_event.so.14.0 00:04:46.292 SYMLINK libspdk_event.so 00:04:46.857 LIB libspdk_fuse_dispatcher.a 00:04:46.857 SO libspdk_fuse_dispatcher.so.1.0 00:04:46.857 SYMLINK 
libspdk_fuse_dispatcher.so 00:04:48.231 LIB libspdk_blob.a 00:04:48.231 SO libspdk_blob.so.11.0 00:04:48.231 SYMLINK libspdk_blob.so 00:04:48.798 CC lib/lvol/lvol.o 00:04:48.798 LIB libspdk_bdev.a 00:04:48.798 CC lib/blobfs/blobfs.o 00:04:48.798 CC lib/blobfs/tree.o 00:04:48.798 SO libspdk_bdev.so.16.0 00:04:49.057 SYMLINK libspdk_bdev.so 00:04:49.317 CC lib/scsi/dev.o 00:04:49.317 CC lib/scsi/lun.o 00:04:49.317 CC lib/scsi/scsi.o 00:04:49.317 CC lib/nvmf/ctrlr.o 00:04:49.317 CC lib/nbd/nbd.o 00:04:49.317 CC lib/scsi/port.o 00:04:49.317 CC lib/ublk/ublk.o 00:04:49.317 CC lib/ftl/ftl_core.o 00:04:49.317 CC lib/ftl/ftl_init.o 00:04:49.317 CC lib/ublk/ublk_rpc.o 00:04:49.576 CC lib/ftl/ftl_layout.o 00:04:49.576 CC lib/scsi/scsi_bdev.o 00:04:49.576 CC lib/ftl/ftl_debug.o 00:04:49.576 CC lib/ftl/ftl_io.o 00:04:49.576 CC lib/nvmf/ctrlr_discovery.o 00:04:49.576 CC lib/nbd/nbd_rpc.o 00:04:49.576 LIB libspdk_blobfs.a 00:04:49.836 SO libspdk_blobfs.so.10.0 00:04:49.836 SYMLINK libspdk_blobfs.so 00:04:49.836 CC lib/nvmf/ctrlr_bdev.o 00:04:49.836 LIB libspdk_lvol.a 00:04:49.836 CC lib/ftl/ftl_sb.o 00:04:49.836 SO libspdk_lvol.so.10.0 00:04:49.836 CC lib/ftl/ftl_l2p.o 00:04:49.836 LIB libspdk_nbd.a 00:04:49.836 CC lib/ftl/ftl_l2p_flat.o 00:04:49.836 SO libspdk_nbd.so.7.0 00:04:49.836 SYMLINK libspdk_lvol.so 00:04:49.836 CC lib/ftl/ftl_nv_cache.o 00:04:50.095 LIB libspdk_ublk.a 00:04:50.095 SYMLINK libspdk_nbd.so 00:04:50.095 CC lib/scsi/scsi_pr.o 00:04:50.095 SO libspdk_ublk.so.3.0 00:04:50.095 CC lib/ftl/ftl_band.o 00:04:50.095 SYMLINK libspdk_ublk.so 00:04:50.095 CC lib/ftl/ftl_band_ops.o 00:04:50.095 CC lib/nvmf/subsystem.o 00:04:50.095 CC lib/nvmf/nvmf.o 00:04:50.095 CC lib/nvmf/nvmf_rpc.o 00:04:50.095 CC lib/nvmf/transport.o 00:04:50.355 CC lib/scsi/scsi_rpc.o 00:04:50.355 CC lib/ftl/ftl_writer.o 00:04:50.615 CC lib/nvmf/tcp.o 00:04:50.615 CC lib/scsi/task.o 00:04:50.615 CC lib/nvmf/stubs.o 00:04:50.615 CC lib/nvmf/mdns_server.o 00:04:50.874 LIB libspdk_scsi.a 00:04:50.874 
SO libspdk_scsi.so.9.0 00:04:51.133 CC lib/nvmf/rdma.o 00:04:51.133 SYMLINK libspdk_scsi.so 00:04:51.133 CC lib/nvmf/auth.o 00:04:51.133 CC lib/ftl/ftl_rq.o 00:04:51.133 CC lib/ftl/ftl_reloc.o 00:04:51.133 CC lib/ftl/ftl_l2p_cache.o 00:04:51.392 CC lib/ftl/ftl_p2l.o 00:04:51.392 CC lib/ftl/ftl_p2l_log.o 00:04:51.392 CC lib/vhost/vhost.o 00:04:51.392 CC lib/iscsi/conn.o 00:04:51.652 CC lib/ftl/mngt/ftl_mngt.o 00:04:51.652 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:51.652 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:51.652 CC lib/vhost/vhost_rpc.o 00:04:51.911 CC lib/iscsi/init_grp.o 00:04:51.911 CC lib/iscsi/iscsi.o 00:04:51.911 CC lib/iscsi/param.o 00:04:51.911 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:51.911 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:52.171 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:52.171 CC lib/iscsi/portal_grp.o 00:04:52.171 CC lib/iscsi/tgt_node.o 00:04:52.171 CC lib/iscsi/iscsi_subsystem.o 00:04:52.171 CC lib/iscsi/iscsi_rpc.o 00:04:52.430 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:52.430 CC lib/iscsi/task.o 00:04:52.430 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:52.430 CC lib/vhost/vhost_scsi.o 00:04:52.430 CC lib/vhost/vhost_blk.o 00:04:52.430 CC lib/vhost/rte_vhost_user.o 00:04:52.690 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:52.690 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:52.690 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:52.690 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:52.690 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:52.690 CC lib/ftl/utils/ftl_conf.o 00:04:52.690 CC lib/ftl/utils/ftl_md.o 00:04:52.949 CC lib/ftl/utils/ftl_mempool.o 00:04:52.949 CC lib/ftl/utils/ftl_bitmap.o 00:04:52.949 CC lib/ftl/utils/ftl_property.o 00:04:53.214 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:53.214 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:53.214 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:53.214 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:53.214 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:53.214 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:53.484 CC lib/ftl/upgrade/ftl_trim_upgrade.o 
00:04:53.484 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:53.484 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:53.484 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:53.484 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:53.484 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:53.484 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:53.484 LIB libspdk_iscsi.a 00:04:53.484 CC lib/ftl/base/ftl_base_dev.o 00:04:53.484 CC lib/ftl/base/ftl_base_bdev.o 00:04:53.743 SO libspdk_iscsi.so.8.0 00:04:53.743 CC lib/ftl/ftl_trace.o 00:04:53.743 LIB libspdk_vhost.a 00:04:53.743 LIB libspdk_nvmf.a 00:04:53.743 SO libspdk_vhost.so.8.0 00:04:53.743 SYMLINK libspdk_iscsi.so 00:04:53.743 SO libspdk_nvmf.so.19.0 00:04:53.743 SYMLINK libspdk_vhost.so 00:04:54.003 LIB libspdk_ftl.a 00:04:54.003 SYMLINK libspdk_nvmf.so 00:04:54.264 SO libspdk_ftl.so.9.0 00:04:54.524 SYMLINK libspdk_ftl.so 00:04:54.784 CC module/env_dpdk/env_dpdk_rpc.o 00:04:54.784 CC module/fsdev/aio/fsdev_aio.o 00:04:54.784 CC module/sock/posix/posix.o 00:04:54.784 CC module/keyring/file/keyring.o 00:04:54.784 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:54.784 CC module/accel/ioat/accel_ioat.o 00:04:54.784 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:54.784 CC module/blob/bdev/blob_bdev.o 00:04:54.784 CC module/accel/error/accel_error.o 00:04:54.784 CC module/scheduler/gscheduler/gscheduler.o 00:04:55.043 LIB libspdk_env_dpdk_rpc.a 00:04:55.043 SO libspdk_env_dpdk_rpc.so.6.0 00:04:55.043 SYMLINK libspdk_env_dpdk_rpc.so 00:04:55.043 CC module/keyring/file/keyring_rpc.o 00:04:55.043 LIB libspdk_scheduler_dpdk_governor.a 00:04:55.043 LIB libspdk_scheduler_gscheduler.a 00:04:55.043 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:55.043 CC module/accel/error/accel_error_rpc.o 00:04:55.043 SO libspdk_scheduler_gscheduler.so.4.0 00:04:55.043 CC module/accel/ioat/accel_ioat_rpc.o 00:04:55.043 LIB libspdk_scheduler_dynamic.a 00:04:55.043 SO libspdk_scheduler_dynamic.so.4.0 00:04:55.043 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:55.043 SYMLINK 
libspdk_scheduler_gscheduler.so 00:04:55.043 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:55.043 CC module/fsdev/aio/linux_aio_mgr.o 00:04:55.043 SYMLINK libspdk_scheduler_dynamic.so 00:04:55.043 LIB libspdk_keyring_file.a 00:04:55.043 CC module/keyring/linux/keyring.o 00:04:55.043 LIB libspdk_blob_bdev.a 00:04:55.302 SO libspdk_blob_bdev.so.11.0 00:04:55.302 SO libspdk_keyring_file.so.2.0 00:04:55.302 LIB libspdk_accel_ioat.a 00:04:55.302 LIB libspdk_accel_error.a 00:04:55.302 SO libspdk_accel_ioat.so.6.0 00:04:55.302 SO libspdk_accel_error.so.2.0 00:04:55.302 SYMLINK libspdk_keyring_file.so 00:04:55.302 SYMLINK libspdk_blob_bdev.so 00:04:55.302 CC module/keyring/linux/keyring_rpc.o 00:04:55.302 SYMLINK libspdk_accel_ioat.so 00:04:55.302 CC module/accel/dsa/accel_dsa.o 00:04:55.302 SYMLINK libspdk_accel_error.so 00:04:55.302 CC module/accel/dsa/accel_dsa_rpc.o 00:04:55.302 LIB libspdk_keyring_linux.a 00:04:55.560 CC module/accel/iaa/accel_iaa.o 00:04:55.560 SO libspdk_keyring_linux.so.1.0 00:04:55.560 CC module/accel/iaa/accel_iaa_rpc.o 00:04:55.560 SYMLINK libspdk_keyring_linux.so 00:04:55.560 CC module/blobfs/bdev/blobfs_bdev.o 00:04:55.560 CC module/bdev/gpt/gpt.o 00:04:55.560 CC module/bdev/error/vbdev_error.o 00:04:55.560 CC module/bdev/delay/vbdev_delay.o 00:04:55.560 LIB libspdk_accel_dsa.a 00:04:55.560 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:55.560 LIB libspdk_fsdev_aio.a 00:04:55.560 SO libspdk_accel_dsa.so.5.0 00:04:55.560 CC module/bdev/lvol/vbdev_lvol.o 00:04:55.560 LIB libspdk_accel_iaa.a 00:04:55.818 SO libspdk_fsdev_aio.so.1.0 00:04:55.818 SO libspdk_accel_iaa.so.3.0 00:04:55.818 SYMLINK libspdk_accel_dsa.so 00:04:55.818 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:55.818 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:55.818 SYMLINK libspdk_accel_iaa.so 00:04:55.818 SYMLINK libspdk_fsdev_aio.so 00:04:55.818 CC module/bdev/gpt/vbdev_gpt.o 00:04:55.818 CC module/bdev/error/vbdev_error_rpc.o 00:04:55.818 LIB libspdk_sock_posix.a 00:04:55.818 SO 
libspdk_sock_posix.so.6.0 00:04:55.818 SYMLINK libspdk_sock_posix.so 00:04:55.818 CC module/bdev/malloc/bdev_malloc.o 00:04:56.076 LIB libspdk_blobfs_bdev.a 00:04:56.076 CC module/bdev/null/bdev_null.o 00:04:56.076 SO libspdk_blobfs_bdev.so.6.0 00:04:56.076 LIB libspdk_bdev_error.a 00:04:56.076 LIB libspdk_bdev_delay.a 00:04:56.076 SO libspdk_bdev_error.so.6.0 00:04:56.076 SO libspdk_bdev_delay.so.6.0 00:04:56.076 SYMLINK libspdk_blobfs_bdev.so 00:04:56.076 CC module/bdev/null/bdev_null_rpc.o 00:04:56.076 CC module/bdev/nvme/bdev_nvme.o 00:04:56.076 SYMLINK libspdk_bdev_error.so 00:04:56.076 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:56.076 CC module/bdev/passthru/vbdev_passthru.o 00:04:56.076 LIB libspdk_bdev_gpt.a 00:04:56.076 SYMLINK libspdk_bdev_delay.so 00:04:56.076 CC module/bdev/nvme/nvme_rpc.o 00:04:56.076 SO libspdk_bdev_gpt.so.6.0 00:04:56.076 SYMLINK libspdk_bdev_gpt.so 00:04:56.334 LIB libspdk_bdev_lvol.a 00:04:56.334 LIB libspdk_bdev_null.a 00:04:56.334 SO libspdk_bdev_lvol.so.6.0 00:04:56.334 SO libspdk_bdev_null.so.6.0 00:04:56.334 CC module/bdev/raid/bdev_raid.o 00:04:56.334 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:56.334 CC module/bdev/split/vbdev_split.o 00:04:56.334 SYMLINK libspdk_bdev_null.so 00:04:56.334 CC module/bdev/raid/bdev_raid_rpc.o 00:04:56.334 SYMLINK libspdk_bdev_lvol.so 00:04:56.334 CC module/bdev/raid/bdev_raid_sb.o 00:04:56.334 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:56.334 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:56.334 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:56.593 LIB libspdk_bdev_malloc.a 00:04:56.593 SO libspdk_bdev_malloc.so.6.0 00:04:56.593 LIB libspdk_bdev_passthru.a 00:04:56.593 SO libspdk_bdev_passthru.so.6.0 00:04:56.593 CC module/bdev/split/vbdev_split_rpc.o 00:04:56.593 CC module/bdev/raid/raid0.o 00:04:56.593 SYMLINK libspdk_bdev_malloc.so 00:04:56.593 SYMLINK libspdk_bdev_passthru.so 00:04:56.593 CC module/bdev/raid/raid1.o 00:04:56.853 CC module/bdev/aio/bdev_aio.o 
00:04:56.853 CC module/bdev/ftl/bdev_ftl.o 00:04:56.853 LIB libspdk_bdev_split.a 00:04:56.853 LIB libspdk_bdev_zone_block.a 00:04:56.853 SO libspdk_bdev_split.so.6.0 00:04:56.853 CC module/bdev/iscsi/bdev_iscsi.o 00:04:56.853 SO libspdk_bdev_zone_block.so.6.0 00:04:56.853 CC module/bdev/nvme/bdev_mdns_client.o 00:04:56.853 SYMLINK libspdk_bdev_zone_block.so 00:04:56.853 CC module/bdev/aio/bdev_aio_rpc.o 00:04:56.853 SYMLINK libspdk_bdev_split.so 00:04:56.853 CC module/bdev/nvme/vbdev_opal.o 00:04:56.853 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:57.112 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:57.112 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:57.112 CC module/bdev/raid/concat.o 00:04:57.112 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:57.112 CC module/bdev/raid/raid5f.o 00:04:57.112 LIB libspdk_bdev_aio.a 00:04:57.112 SO libspdk_bdev_aio.so.6.0 00:04:57.112 LIB libspdk_bdev_iscsi.a 00:04:57.112 SYMLINK libspdk_bdev_aio.so 00:04:57.112 SO libspdk_bdev_iscsi.so.6.0 00:04:57.371 LIB libspdk_bdev_ftl.a 00:04:57.371 SYMLINK libspdk_bdev_iscsi.so 00:04:57.371 SO libspdk_bdev_ftl.so.6.0 00:04:57.371 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:57.371 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:57.371 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:57.371 SYMLINK libspdk_bdev_ftl.so 00:04:57.630 LIB libspdk_bdev_raid.a 00:04:57.630 SO libspdk_bdev_raid.so.6.0 00:04:57.888 SYMLINK libspdk_bdev_raid.so 00:04:57.888 LIB libspdk_bdev_virtio.a 00:04:58.146 SO libspdk_bdev_virtio.so.6.0 00:04:58.146 SYMLINK libspdk_bdev_virtio.so 00:04:58.714 LIB libspdk_bdev_nvme.a 00:04:58.714 SO libspdk_bdev_nvme.so.7.0 00:04:58.971 SYMLINK libspdk_bdev_nvme.so 00:04:59.538 CC module/event/subsystems/keyring/keyring.o 00:04:59.538 CC module/event/subsystems/vmd/vmd.o 00:04:59.538 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:59.538 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:59.538 CC module/event/subsystems/fsdev/fsdev.o 00:04:59.538 CC 
module/event/subsystems/scheduler/scheduler.o 00:04:59.538 CC module/event/subsystems/sock/sock.o 00:04:59.538 CC module/event/subsystems/iobuf/iobuf.o 00:04:59.538 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:59.538 LIB libspdk_event_vhost_blk.a 00:04:59.538 LIB libspdk_event_fsdev.a 00:04:59.538 LIB libspdk_event_vmd.a 00:04:59.538 LIB libspdk_event_keyring.a 00:04:59.539 LIB libspdk_event_scheduler.a 00:04:59.539 SO libspdk_event_vhost_blk.so.3.0 00:04:59.539 LIB libspdk_event_sock.a 00:04:59.539 SO libspdk_event_fsdev.so.1.0 00:04:59.539 SO libspdk_event_keyring.so.1.0 00:04:59.798 SO libspdk_event_vmd.so.6.0 00:04:59.798 SO libspdk_event_scheduler.so.4.0 00:04:59.798 LIB libspdk_event_iobuf.a 00:04:59.798 SO libspdk_event_sock.so.5.0 00:04:59.798 SO libspdk_event_iobuf.so.3.0 00:04:59.798 SYMLINK libspdk_event_vhost_blk.so 00:04:59.798 SYMLINK libspdk_event_fsdev.so 00:04:59.798 SYMLINK libspdk_event_vmd.so 00:04:59.798 SYMLINK libspdk_event_scheduler.so 00:04:59.798 SYMLINK libspdk_event_keyring.so 00:04:59.798 SYMLINK libspdk_event_sock.so 00:04:59.798 SYMLINK libspdk_event_iobuf.so 00:05:00.057 CC module/event/subsystems/accel/accel.o 00:05:00.316 LIB libspdk_event_accel.a 00:05:00.316 SO libspdk_event_accel.so.6.0 00:05:00.316 SYMLINK libspdk_event_accel.so 00:05:00.885 CC module/event/subsystems/bdev/bdev.o 00:05:00.885 LIB libspdk_event_bdev.a 00:05:01.145 SO libspdk_event_bdev.so.6.0 00:05:01.145 SYMLINK libspdk_event_bdev.so 00:05:01.404 CC module/event/subsystems/ublk/ublk.o 00:05:01.404 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:01.404 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:01.404 CC module/event/subsystems/scsi/scsi.o 00:05:01.404 CC module/event/subsystems/nbd/nbd.o 00:05:01.664 LIB libspdk_event_ublk.a 00:05:01.664 LIB libspdk_event_nbd.a 00:05:01.664 LIB libspdk_event_scsi.a 00:05:01.664 SO libspdk_event_nbd.so.6.0 00:05:01.664 SO libspdk_event_ublk.so.3.0 00:05:01.664 SO libspdk_event_scsi.so.6.0 00:05:01.664 LIB 
libspdk_event_nvmf.a 00:05:01.664 SYMLINK libspdk_event_nbd.so 00:05:01.664 SYMLINK libspdk_event_ublk.so 00:05:01.664 SYMLINK libspdk_event_scsi.so 00:05:01.664 SO libspdk_event_nvmf.so.6.0 00:05:01.664 SYMLINK libspdk_event_nvmf.so 00:05:02.233 CC module/event/subsystems/iscsi/iscsi.o 00:05:02.233 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:02.233 LIB libspdk_event_vhost_scsi.a 00:05:02.233 SO libspdk_event_vhost_scsi.so.3.0 00:05:02.233 LIB libspdk_event_iscsi.a 00:05:02.233 SO libspdk_event_iscsi.so.6.0 00:05:02.233 SYMLINK libspdk_event_vhost_scsi.so 00:05:02.492 SYMLINK libspdk_event_iscsi.so 00:05:02.492 SO libspdk.so.6.0 00:05:02.492 SYMLINK libspdk.so 00:05:02.751 CC test/rpc_client/rpc_client_test.o 00:05:02.751 TEST_HEADER include/spdk/accel.h 00:05:02.751 CXX app/trace/trace.o 00:05:02.751 TEST_HEADER include/spdk/accel_module.h 00:05:02.751 TEST_HEADER include/spdk/assert.h 00:05:02.751 TEST_HEADER include/spdk/barrier.h 00:05:02.751 TEST_HEADER include/spdk/base64.h 00:05:02.751 CC app/trace_record/trace_record.o 00:05:03.010 TEST_HEADER include/spdk/bdev.h 00:05:03.010 TEST_HEADER include/spdk/bdev_module.h 00:05:03.010 TEST_HEADER include/spdk/bdev_zone.h 00:05:03.010 TEST_HEADER include/spdk/bit_array.h 00:05:03.010 TEST_HEADER include/spdk/bit_pool.h 00:05:03.010 TEST_HEADER include/spdk/blob_bdev.h 00:05:03.010 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:03.010 TEST_HEADER include/spdk/blobfs.h 00:05:03.010 TEST_HEADER include/spdk/blob.h 00:05:03.010 TEST_HEADER include/spdk/conf.h 00:05:03.010 TEST_HEADER include/spdk/config.h 00:05:03.011 TEST_HEADER include/spdk/cpuset.h 00:05:03.011 TEST_HEADER include/spdk/crc16.h 00:05:03.011 TEST_HEADER include/spdk/crc32.h 00:05:03.011 TEST_HEADER include/spdk/crc64.h 00:05:03.011 TEST_HEADER include/spdk/dif.h 00:05:03.011 TEST_HEADER include/spdk/dma.h 00:05:03.011 TEST_HEADER include/spdk/endian.h 00:05:03.011 TEST_HEADER include/spdk/env_dpdk.h 00:05:03.011 TEST_HEADER 
include/spdk/env.h 00:05:03.011 TEST_HEADER include/spdk/event.h 00:05:03.011 TEST_HEADER include/spdk/fd_group.h 00:05:03.011 TEST_HEADER include/spdk/fd.h 00:05:03.011 CC app/nvmf_tgt/nvmf_main.o 00:05:03.011 TEST_HEADER include/spdk/file.h 00:05:03.011 TEST_HEADER include/spdk/fsdev.h 00:05:03.011 TEST_HEADER include/spdk/fsdev_module.h 00:05:03.011 TEST_HEADER include/spdk/ftl.h 00:05:03.011 CC test/thread/poller_perf/poller_perf.o 00:05:03.011 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:03.011 TEST_HEADER include/spdk/gpt_spec.h 00:05:03.011 TEST_HEADER include/spdk/hexlify.h 00:05:03.011 TEST_HEADER include/spdk/histogram_data.h 00:05:03.011 TEST_HEADER include/spdk/idxd.h 00:05:03.011 TEST_HEADER include/spdk/idxd_spec.h 00:05:03.011 TEST_HEADER include/spdk/init.h 00:05:03.011 TEST_HEADER include/spdk/ioat.h 00:05:03.011 TEST_HEADER include/spdk/ioat_spec.h 00:05:03.011 TEST_HEADER include/spdk/iscsi_spec.h 00:05:03.011 CC examples/util/zipf/zipf.o 00:05:03.011 TEST_HEADER include/spdk/json.h 00:05:03.011 TEST_HEADER include/spdk/jsonrpc.h 00:05:03.011 TEST_HEADER include/spdk/keyring.h 00:05:03.011 TEST_HEADER include/spdk/keyring_module.h 00:05:03.011 TEST_HEADER include/spdk/likely.h 00:05:03.011 TEST_HEADER include/spdk/log.h 00:05:03.011 TEST_HEADER include/spdk/lvol.h 00:05:03.011 TEST_HEADER include/spdk/md5.h 00:05:03.011 TEST_HEADER include/spdk/memory.h 00:05:03.011 TEST_HEADER include/spdk/mmio.h 00:05:03.011 TEST_HEADER include/spdk/nbd.h 00:05:03.011 TEST_HEADER include/spdk/net.h 00:05:03.011 TEST_HEADER include/spdk/notify.h 00:05:03.011 TEST_HEADER include/spdk/nvme.h 00:05:03.011 CC test/dma/test_dma/test_dma.o 00:05:03.011 TEST_HEADER include/spdk/nvme_intel.h 00:05:03.011 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:03.011 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:03.011 TEST_HEADER include/spdk/nvme_spec.h 00:05:03.011 CC test/app/bdev_svc/bdev_svc.o 00:05:03.011 TEST_HEADER include/spdk/nvme_zns.h 00:05:03.011 TEST_HEADER 
include/spdk/nvmf_cmd.h 00:05:03.011 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:03.011 TEST_HEADER include/spdk/nvmf.h 00:05:03.011 TEST_HEADER include/spdk/nvmf_spec.h 00:05:03.011 TEST_HEADER include/spdk/nvmf_transport.h 00:05:03.011 TEST_HEADER include/spdk/opal.h 00:05:03.011 TEST_HEADER include/spdk/opal_spec.h 00:05:03.011 TEST_HEADER include/spdk/pci_ids.h 00:05:03.011 TEST_HEADER include/spdk/pipe.h 00:05:03.011 TEST_HEADER include/spdk/queue.h 00:05:03.011 TEST_HEADER include/spdk/reduce.h 00:05:03.011 TEST_HEADER include/spdk/rpc.h 00:05:03.011 TEST_HEADER include/spdk/scheduler.h 00:05:03.011 TEST_HEADER include/spdk/scsi.h 00:05:03.011 TEST_HEADER include/spdk/scsi_spec.h 00:05:03.011 TEST_HEADER include/spdk/sock.h 00:05:03.011 TEST_HEADER include/spdk/stdinc.h 00:05:03.011 CC test/env/mem_callbacks/mem_callbacks.o 00:05:03.011 TEST_HEADER include/spdk/string.h 00:05:03.011 TEST_HEADER include/spdk/thread.h 00:05:03.011 TEST_HEADER include/spdk/trace.h 00:05:03.011 TEST_HEADER include/spdk/trace_parser.h 00:05:03.011 TEST_HEADER include/spdk/tree.h 00:05:03.011 TEST_HEADER include/spdk/ublk.h 00:05:03.011 TEST_HEADER include/spdk/util.h 00:05:03.011 LINK rpc_client_test 00:05:03.011 TEST_HEADER include/spdk/uuid.h 00:05:03.011 TEST_HEADER include/spdk/version.h 00:05:03.011 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:03.011 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:03.011 TEST_HEADER include/spdk/vhost.h 00:05:03.011 TEST_HEADER include/spdk/vmd.h 00:05:03.011 TEST_HEADER include/spdk/xor.h 00:05:03.011 TEST_HEADER include/spdk/zipf.h 00:05:03.011 CXX test/cpp_headers/accel.o 00:05:03.276 LINK zipf 00:05:03.276 LINK poller_perf 00:05:03.276 LINK nvmf_tgt 00:05:03.276 LINK spdk_trace_record 00:05:03.276 LINK bdev_svc 00:05:03.276 LINK spdk_trace 00:05:03.276 CXX test/cpp_headers/accel_module.o 00:05:03.536 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:03.536 CC test/app/histogram_perf/histogram_perf.o 00:05:03.536 CC 
test/env/vtophys/vtophys.o 00:05:03.536 CXX test/cpp_headers/assert.o 00:05:03.536 CC examples/ioat/perf/perf.o 00:05:03.536 CC examples/vmd/lsvmd/lsvmd.o 00:05:03.536 LINK test_dma 00:05:03.536 LINK histogram_perf 00:05:03.536 CC app/iscsi_tgt/iscsi_tgt.o 00:05:03.536 CC test/event/event_perf/event_perf.o 00:05:03.536 CXX test/cpp_headers/barrier.o 00:05:03.536 LINK vtophys 00:05:03.536 LINK mem_callbacks 00:05:03.536 LINK lsvmd 00:05:03.796 LINK ioat_perf 00:05:03.796 CXX test/cpp_headers/base64.o 00:05:03.796 CXX test/cpp_headers/bdev.o 00:05:03.796 LINK event_perf 00:05:03.796 LINK iscsi_tgt 00:05:03.796 CC app/spdk_tgt/spdk_tgt.o 00:05:03.796 LINK nvme_fuzz 00:05:03.796 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:03.796 CC examples/ioat/verify/verify.o 00:05:03.796 CC examples/idxd/perf/perf.o 00:05:03.796 CC examples/vmd/led/led.o 00:05:04.055 CXX test/cpp_headers/bdev_module.o 00:05:04.055 CC test/env/memory/memory_ut.o 00:05:04.055 CC test/event/reactor/reactor.o 00:05:04.055 LINK env_dpdk_post_init 00:05:04.055 LINK led 00:05:04.055 LINK spdk_tgt 00:05:04.055 CC test/env/pci/pci_ut.o 00:05:04.055 LINK verify 00:05:04.055 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:04.055 CXX test/cpp_headers/bdev_zone.o 00:05:04.055 LINK reactor 00:05:04.314 CXX test/cpp_headers/bit_array.o 00:05:04.314 CXX test/cpp_headers/bit_pool.o 00:05:04.314 LINK idxd_perf 00:05:04.314 CXX test/cpp_headers/blob_bdev.o 00:05:04.314 CC app/spdk_lspci/spdk_lspci.o 00:05:04.314 CC test/event/reactor_perf/reactor_perf.o 00:05:04.314 CC test/app/jsoncat/jsoncat.o 00:05:04.574 CC test/app/stub/stub.o 00:05:04.574 CXX test/cpp_headers/blobfs_bdev.o 00:05:04.574 LINK spdk_lspci 00:05:04.574 LINK reactor_perf 00:05:04.574 LINK jsoncat 00:05:04.574 LINK pci_ut 00:05:04.574 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:04.574 CC test/accel/dif/dif.o 00:05:04.574 CXX test/cpp_headers/blobfs.o 00:05:04.574 LINK stub 00:05:04.834 CXX test/cpp_headers/blob.o 00:05:04.834 LINK 
interrupt_tgt 00:05:04.834 CC app/spdk_nvme_perf/perf.o 00:05:04.834 CC test/event/app_repeat/app_repeat.o 00:05:04.834 CXX test/cpp_headers/conf.o 00:05:05.094 CC test/event/scheduler/scheduler.o 00:05:05.094 LINK app_repeat 00:05:05.094 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:05.094 CC examples/sock/hello_world/hello_sock.o 00:05:05.094 CC examples/thread/thread/thread_ex.o 00:05:05.094 CXX test/cpp_headers/config.o 00:05:05.094 CXX test/cpp_headers/cpuset.o 00:05:05.094 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:05.354 LINK memory_ut 00:05:05.354 CC app/spdk_nvme_identify/identify.o 00:05:05.354 LINK scheduler 00:05:05.354 CXX test/cpp_headers/crc16.o 00:05:05.354 LINK thread 00:05:05.354 LINK hello_sock 00:05:05.354 CXX test/cpp_headers/crc32.o 00:05:05.354 LINK dif 00:05:05.613 CXX test/cpp_headers/crc64.o 00:05:05.613 CC test/blobfs/mkfs/mkfs.o 00:05:05.613 LINK vhost_fuzz 00:05:05.613 CC test/nvme/aer/aer.o 00:05:05.613 CC test/lvol/esnap/esnap.o 00:05:05.613 CC examples/nvme/hello_world/hello_world.o 00:05:05.613 CC test/nvme/reset/reset.o 00:05:05.872 CXX test/cpp_headers/dif.o 00:05:05.872 LINK spdk_nvme_perf 00:05:05.872 CXX test/cpp_headers/dma.o 00:05:05.872 LINK mkfs 00:05:05.872 LINK iscsi_fuzz 00:05:05.872 LINK hello_world 00:05:05.872 CXX test/cpp_headers/endian.o 00:05:05.872 CC test/nvme/sgl/sgl.o 00:05:06.131 LINK aer 00:05:06.131 LINK reset 00:05:06.131 CXX test/cpp_headers/env_dpdk.o 00:05:06.131 CC test/nvme/e2edp/nvme_dp.o 00:05:06.131 CC app/spdk_nvme_discover/discovery_aer.o 00:05:06.131 CXX test/cpp_headers/env.o 00:05:06.131 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:06.131 CC examples/nvme/reconnect/reconnect.o 00:05:06.131 LINK spdk_nvme_identify 00:05:06.391 CC examples/nvme/arbitration/arbitration.o 00:05:06.391 CC app/spdk_top/spdk_top.o 00:05:06.391 LINK sgl 00:05:06.391 LINK nvme_dp 00:05:06.391 CXX test/cpp_headers/event.o 00:05:06.391 LINK spdk_nvme_discover 00:05:06.651 CXX test/cpp_headers/fd_group.o 
00:05:06.651 CC examples/nvme/hotplug/hotplug.o 00:05:06.651 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:06.651 CC test/nvme/overhead/overhead.o 00:05:06.651 LINK arbitration 00:05:06.651 LINK reconnect 00:05:06.651 CC examples/nvme/abort/abort.o 00:05:06.651 CXX test/cpp_headers/fd.o 00:05:06.651 LINK cmb_copy 00:05:06.910 LINK nvme_manage 00:05:06.910 LINK hotplug 00:05:06.910 LINK overhead 00:05:06.910 CXX test/cpp_headers/file.o 00:05:06.910 CC test/nvme/err_injection/err_injection.o 00:05:06.910 CC test/nvme/startup/startup.o 00:05:06.910 CXX test/cpp_headers/fsdev.o 00:05:07.170 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:07.170 LINK abort 00:05:07.170 LINK startup 00:05:07.170 LINK err_injection 00:05:07.170 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:07.170 CXX test/cpp_headers/fsdev_module.o 00:05:07.170 CC examples/accel/perf/accel_perf.o 00:05:07.170 CC examples/blob/hello_world/hello_blob.o 00:05:07.170 CXX test/cpp_headers/ftl.o 00:05:07.170 LINK pmr_persistence 00:05:07.431 LINK spdk_top 00:05:07.431 CC test/nvme/reserve/reserve.o 00:05:07.431 CC app/vhost/vhost.o 00:05:07.431 LINK hello_fsdev 00:05:07.431 CC app/spdk_dd/spdk_dd.o 00:05:07.431 LINK hello_blob 00:05:07.431 CXX test/cpp_headers/fuse_dispatcher.o 00:05:07.431 CXX test/cpp_headers/gpt_spec.o 00:05:07.690 LINK vhost 00:05:07.690 CC examples/blob/cli/blobcli.o 00:05:07.690 LINK reserve 00:05:07.690 CXX test/cpp_headers/hexlify.o 00:05:07.690 LINK accel_perf 00:05:07.690 CC test/nvme/simple_copy/simple_copy.o 00:05:07.690 CC app/fio/nvme/fio_plugin.o 00:05:07.690 CXX test/cpp_headers/histogram_data.o 00:05:07.690 CC test/bdev/bdevio/bdevio.o 00:05:07.690 LINK spdk_dd 00:05:07.690 CXX test/cpp_headers/idxd.o 00:05:07.950 CC test/nvme/connect_stress/connect_stress.o 00:05:07.950 CC app/fio/bdev/fio_plugin.o 00:05:07.950 LINK simple_copy 00:05:07.950 CXX test/cpp_headers/idxd_spec.o 00:05:07.950 CXX test/cpp_headers/init.o 00:05:07.950 LINK connect_stress 00:05:08.209 LINK 
blobcli 00:05:08.209 CC examples/bdev/hello_world/hello_bdev.o 00:05:08.209 CXX test/cpp_headers/ioat.o 00:05:08.209 LINK bdevio 00:05:08.209 CC test/nvme/compliance/nvme_compliance.o 00:05:08.209 CC test/nvme/boot_partition/boot_partition.o 00:05:08.209 CC test/nvme/fused_ordering/fused_ordering.o 00:05:08.209 CXX test/cpp_headers/ioat_spec.o 00:05:08.469 LINK spdk_nvme 00:05:08.469 LINK hello_bdev 00:05:08.469 CC examples/bdev/bdevperf/bdevperf.o 00:05:08.469 LINK boot_partition 00:05:08.469 LINK spdk_bdev 00:05:08.469 CXX test/cpp_headers/iscsi_spec.o 00:05:08.469 CXX test/cpp_headers/json.o 00:05:08.469 LINK fused_ordering 00:05:08.469 CXX test/cpp_headers/jsonrpc.o 00:05:08.469 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:08.469 CXX test/cpp_headers/keyring.o 00:05:08.728 CXX test/cpp_headers/keyring_module.o 00:05:08.728 LINK nvme_compliance 00:05:08.728 CC test/nvme/cuse/cuse.o 00:05:08.728 CC test/nvme/fdp/fdp.o 00:05:08.728 CXX test/cpp_headers/likely.o 00:05:08.728 CXX test/cpp_headers/log.o 00:05:08.728 LINK doorbell_aers 00:05:08.728 CXX test/cpp_headers/lvol.o 00:05:08.728 CXX test/cpp_headers/md5.o 00:05:08.728 CXX test/cpp_headers/memory.o 00:05:08.988 CXX test/cpp_headers/mmio.o 00:05:08.988 CXX test/cpp_headers/nbd.o 00:05:08.988 CXX test/cpp_headers/net.o 00:05:08.988 CXX test/cpp_headers/notify.o 00:05:08.988 CXX test/cpp_headers/nvme.o 00:05:08.988 CXX test/cpp_headers/nvme_intel.o 00:05:08.988 CXX test/cpp_headers/nvme_ocssd.o 00:05:08.988 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:08.988 CXX test/cpp_headers/nvme_spec.o 00:05:08.988 LINK fdp 00:05:08.988 CXX test/cpp_headers/nvme_zns.o 00:05:08.988 CXX test/cpp_headers/nvmf_cmd.o 00:05:09.248 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:09.248 CXX test/cpp_headers/nvmf.o 00:05:09.248 CXX test/cpp_headers/nvmf_spec.o 00:05:09.248 CXX test/cpp_headers/nvmf_transport.o 00:05:09.248 CXX test/cpp_headers/opal.o 00:05:09.248 CXX test/cpp_headers/opal_spec.o 00:05:09.248 CXX 
test/cpp_headers/pci_ids.o 00:05:09.248 LINK bdevperf 00:05:09.248 CXX test/cpp_headers/pipe.o 00:05:09.248 CXX test/cpp_headers/queue.o 00:05:09.248 CXX test/cpp_headers/reduce.o 00:05:09.508 CXX test/cpp_headers/rpc.o 00:05:09.508 CXX test/cpp_headers/scheduler.o 00:05:09.508 CXX test/cpp_headers/scsi.o 00:05:09.508 CXX test/cpp_headers/scsi_spec.o 00:05:09.508 CXX test/cpp_headers/sock.o 00:05:09.508 CXX test/cpp_headers/stdinc.o 00:05:09.508 CXX test/cpp_headers/string.o 00:05:09.508 CXX test/cpp_headers/thread.o 00:05:09.508 CXX test/cpp_headers/trace.o 00:05:09.508 CXX test/cpp_headers/trace_parser.o 00:05:09.508 CXX test/cpp_headers/tree.o 00:05:09.508 CXX test/cpp_headers/ublk.o 00:05:09.508 CXX test/cpp_headers/util.o 00:05:09.508 CXX test/cpp_headers/uuid.o 00:05:09.768 CXX test/cpp_headers/version.o 00:05:09.768 CXX test/cpp_headers/vfio_user_pci.o 00:05:09.768 CC examples/nvmf/nvmf/nvmf.o 00:05:09.768 CXX test/cpp_headers/vfio_user_spec.o 00:05:09.768 CXX test/cpp_headers/vhost.o 00:05:09.768 CXX test/cpp_headers/vmd.o 00:05:09.768 CXX test/cpp_headers/xor.o 00:05:09.768 CXX test/cpp_headers/zipf.o 00:05:10.028 LINK nvmf 00:05:10.028 LINK cuse 00:05:11.934 LINK esnap 00:05:12.194 00:05:12.194 real 1m19.385s 00:05:12.194 user 6m11.817s 00:05:12.194 sys 1m8.883s 00:05:12.194 ************************************ 00:05:12.194 END TEST make 00:05:12.194 ************************************ 00:05:12.194 03:05:15 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:12.194 03:05:15 make -- common/autotest_common.sh@10 -- $ set +x 00:05:12.194 03:05:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:12.194 03:05:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:12.194 03:05:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:12.194 03:05:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.194 03:05:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 
00:05:12.194 03:05:15 -- pm/common@44 -- $ pid=6200 00:05:12.194 03:05:15 -- pm/common@50 -- $ kill -TERM 6200 00:05:12.194 03:05:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.194 03:05:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:12.194 03:05:15 -- pm/common@44 -- $ pid=6202 00:05:12.194 03:05:15 -- pm/common@50 -- $ kill -TERM 6202 00:05:12.194 03:05:15 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:12.194 03:05:15 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:12.194 03:05:15 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:12.453 03:05:15 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:12.453 03:05:15 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.453 03:05:15 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.453 03:05:15 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.453 03:05:15 -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.453 03:05:15 -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.453 03:05:15 -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.453 03:05:15 -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.453 03:05:15 -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.453 03:05:15 -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.453 03:05:15 -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.453 03:05:15 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.453 03:05:15 -- scripts/common.sh@344 -- # case "$op" in 00:05:12.453 03:05:15 -- scripts/common.sh@345 -- # : 1 00:05:12.453 03:05:15 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.453 03:05:15 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.453 03:05:15 -- scripts/common.sh@365 -- # decimal 1 00:05:12.453 03:05:15 -- scripts/common.sh@353 -- # local d=1 00:05:12.453 03:05:15 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.453 03:05:15 -- scripts/common.sh@355 -- # echo 1 00:05:12.453 03:05:15 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.453 03:05:15 -- scripts/common.sh@366 -- # decimal 2 00:05:12.453 03:05:15 -- scripts/common.sh@353 -- # local d=2 00:05:12.453 03:05:15 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.453 03:05:15 -- scripts/common.sh@355 -- # echo 2 00:05:12.453 03:05:15 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.453 03:05:15 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.453 03:05:15 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.453 03:05:15 -- scripts/common.sh@368 -- # return 0 00:05:12.453 03:05:15 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.453 03:05:15 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:12.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.453 --rc genhtml_branch_coverage=1 00:05:12.453 --rc genhtml_function_coverage=1 00:05:12.453 --rc genhtml_legend=1 00:05:12.453 --rc geninfo_all_blocks=1 00:05:12.453 --rc geninfo_unexecuted_blocks=1 00:05:12.453 00:05:12.453 ' 00:05:12.454 03:05:15 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:12.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.454 --rc genhtml_branch_coverage=1 00:05:12.454 --rc genhtml_function_coverage=1 00:05:12.454 --rc genhtml_legend=1 00:05:12.454 --rc geninfo_all_blocks=1 00:05:12.454 --rc geninfo_unexecuted_blocks=1 00:05:12.454 00:05:12.454 ' 00:05:12.454 03:05:15 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:12.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.454 --rc genhtml_branch_coverage=1 00:05:12.454 --rc 
genhtml_function_coverage=1 00:05:12.454 --rc genhtml_legend=1 00:05:12.454 --rc geninfo_all_blocks=1 00:05:12.454 --rc geninfo_unexecuted_blocks=1 00:05:12.454 00:05:12.454 ' 00:05:12.454 03:05:15 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:12.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.454 --rc genhtml_branch_coverage=1 00:05:12.454 --rc genhtml_function_coverage=1 00:05:12.454 --rc genhtml_legend=1 00:05:12.454 --rc geninfo_all_blocks=1 00:05:12.454 --rc geninfo_unexecuted_blocks=1 00:05:12.454 00:05:12.454 ' 00:05:12.454 03:05:15 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:12.454 03:05:15 -- nvmf/common.sh@7 -- # uname -s 00:05:12.454 03:05:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.454 03:05:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.454 03:05:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.454 03:05:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.454 03:05:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.454 03:05:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.454 03:05:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.454 03:05:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.454 03:05:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.454 03:05:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.454 03:05:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:40b5cd40-24b0-458e-bc66-c7aa18c725f1 00:05:12.454 03:05:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=40b5cd40-24b0-458e-bc66-c7aa18c725f1 00:05:12.454 03:05:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.454 03:05:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.454 03:05:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:12.454 03:05:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:05:12.454 03:05:15 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:12.454 03:05:15 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:12.454 03:05:15 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.454 03:05:15 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.454 03:05:15 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.454 03:05:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.454 03:05:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.454 03:05:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.454 03:05:15 -- paths/export.sh@5 -- # export PATH 00:05:12.454 03:05:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.454 03:05:15 -- nvmf/common.sh@51 -- # : 0 00:05:12.454 03:05:15 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:12.454 03:05:15 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:12.454 03:05:15 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:05:12.454 03:05:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.454 03:05:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.454 03:05:15 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:12.454 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:12.454 03:05:15 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:12.454 03:05:15 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:12.454 03:05:15 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:12.454 03:05:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:12.454 03:05:15 -- spdk/autotest.sh@32 -- # uname -s 00:05:12.454 03:05:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:12.454 03:05:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:12.454 03:05:15 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:12.454 03:05:15 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:12.454 03:05:15 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:12.454 03:05:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:12.454 03:05:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:12.454 03:05:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:12.454 03:05:15 -- spdk/autotest.sh@48 -- # udevadm_pid=66878 00:05:12.454 03:05:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:12.454 03:05:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:12.454 03:05:15 -- pm/common@17 -- # local monitor 00:05:12.454 03:05:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.454 03:05:15 -- pm/common@21 -- # date +%s 00:05:12.454 03:05:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.454 03:05:15 -- pm/common@25 -- # sleep 1 00:05:12.454 03:05:15 -- 
pm/common@21 -- # date +%s 00:05:12.454 03:05:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731899115 00:05:12.454 03:05:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731899115 00:05:12.454 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731899115_collect-cpu-load.pm.log 00:05:12.454 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731899115_collect-vmstat.pm.log 00:05:13.833 03:05:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:13.833 03:05:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:13.833 03:05:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:13.833 03:05:16 -- common/autotest_common.sh@10 -- # set +x 00:05:13.833 03:05:16 -- spdk/autotest.sh@59 -- # create_test_list 00:05:13.833 03:05:16 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:13.833 03:05:16 -- common/autotest_common.sh@10 -- # set +x 00:05:13.834 03:05:17 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:13.834 03:05:17 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:13.834 03:05:17 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:13.834 03:05:17 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:13.834 03:05:17 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:13.834 03:05:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:13.834 03:05:17 -- common/autotest_common.sh@1455 -- # uname 00:05:13.834 03:05:17 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:13.834 03:05:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:13.834 03:05:17 -- common/autotest_common.sh@1475 -- 
# uname 00:05:13.834 03:05:17 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:13.834 03:05:17 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:13.834 03:05:17 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:13.834 lcov: LCOV version 1.15 00:05:13.834 03:05:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:28.758 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:28.758 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:43.674 03:05:45 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:43.674 03:05:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:43.674 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:43.674 03:05:45 -- spdk/autotest.sh@78 -- # rm -f 00:05:43.674 03:05:45 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:43.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.674 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:43.674 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:43.674 03:05:46 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:43.674 03:05:46 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:43.674 03:05:46 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:43.674 03:05:46 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:43.674 
03:05:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:43.674 03:05:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:43.674 03:05:46 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:43.674 03:05:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:43.674 03:05:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:43.674 03:05:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:43.674 03:05:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:43.674 03:05:46 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:43.674 03:05:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:43.674 03:05:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:43.674 03:05:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:43.674 03:05:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:43.674 03:05:46 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:43.674 03:05:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:43.674 03:05:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:43.674 03:05:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:43.674 03:05:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:43.674 03:05:46 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:43.674 03:05:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:43.674 03:05:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:43.674 03:05:46 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:43.674 03:05:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:43.674 03:05:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:43.674 03:05:46 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:05:43.674 03:05:46 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:43.674 03:05:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:43.674 No valid GPT data, bailing 00:05:43.674 03:05:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:43.674 03:05:46 -- scripts/common.sh@394 -- # pt= 00:05:43.674 03:05:46 -- scripts/common.sh@395 -- # return 1 00:05:43.674 03:05:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:43.674 1+0 records in 00:05:43.674 1+0 records out 00:05:43.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00677047 s, 155 MB/s 00:05:43.674 03:05:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:43.674 03:05:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:43.674 03:05:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:43.674 03:05:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:43.674 03:05:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:43.674 No valid GPT data, bailing 00:05:43.674 03:05:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:43.674 03:05:46 -- scripts/common.sh@394 -- # pt= 00:05:43.674 03:05:46 -- scripts/common.sh@395 -- # return 1 00:05:43.674 03:05:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:43.674 1+0 records in 00:05:43.674 1+0 records out 00:05:43.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00487547 s, 215 MB/s 00:05:43.674 03:05:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:43.674 03:05:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:43.674 03:05:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:43.674 03:05:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:43.674 03:05:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:05:43.674 No valid GPT data, bailing 00:05:43.674 03:05:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:43.674 03:05:46 -- scripts/common.sh@394 -- # pt= 00:05:43.674 03:05:46 -- scripts/common.sh@395 -- # return 1 00:05:43.674 03:05:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:43.674 1+0 records in 00:05:43.674 1+0 records out 00:05:43.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00514715 s, 204 MB/s 00:05:43.674 03:05:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:43.674 03:05:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:43.674 03:05:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:43.674 03:05:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:43.674 03:05:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:43.674 No valid GPT data, bailing 00:05:43.674 03:05:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:43.674 03:05:46 -- scripts/common.sh@394 -- # pt= 00:05:43.674 03:05:46 -- scripts/common.sh@395 -- # return 1 00:05:43.674 03:05:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:43.674 1+0 records in 00:05:43.674 1+0 records out 00:05:43.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00349086 s, 300 MB/s 00:05:43.674 03:05:46 -- spdk/autotest.sh@105 -- # sync 00:05:43.674 03:05:46 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:43.674 03:05:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:43.674 03:05:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:46.212 03:05:49 -- spdk/autotest.sh@111 -- # uname -s 00:05:46.212 03:05:49 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:46.212 03:05:49 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:46.213 03:05:49 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:05:46.781 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.781 Hugepages 00:05:46.781 node hugesize free / total 00:05:46.781 node0 1048576kB 0 / 0 00:05:46.781 node0 2048kB 0 / 0 00:05:46.781 00:05:46.781 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:46.781 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:46.781 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:47.041 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:47.041 03:05:50 -- spdk/autotest.sh@117 -- # uname -s 00:05:47.041 03:05:50 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:47.041 03:05:50 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:47.041 03:05:50 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:47.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.979 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.979 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.979 03:05:51 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:48.915 03:05:52 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:48.915 03:05:52 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:48.915 03:05:52 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:48.915 03:05:52 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:48.915 03:05:52 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:48.915 03:05:52 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:48.915 03:05:52 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:48.915 03:05:52 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:48.915 03:05:52 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:48.915 03:05:52 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:48.915 03:05:52 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:49.174 03:05:52 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:49.434 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:49.434 Waiting for block devices as requested 00:05:49.694 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:49.694 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:49.694 03:05:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:49.694 03:05:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:49.694 03:05:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:49.694 03:05:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:49.694 03:05:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:49.694 03:05:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:49.694 03:05:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:49.694 03:05:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:49.694 03:05:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:49.694 03:05:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:49.694 03:05:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:49.694 03:05:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:49.694 03:05:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:49.694 03:05:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:49.694 03:05:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:49.694 03:05:53 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:05:49.694 03:05:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:49.694 03:05:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:49.694 03:05:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:49.694 03:05:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:49.694 03:05:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:49.694 03:05:53 -- common/autotest_common.sh@1541 -- # continue 00:05:49.694 03:05:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:49.694 03:05:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:49.694 03:05:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:49.694 03:05:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:49.694 03:05:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:49.694 03:05:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:49.694 03:05:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:49.694 03:05:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:49.694 03:05:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:49.694 03:05:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:49.694 03:05:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:49.694 03:05:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:49.694 03:05:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:49.694 03:05:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:49.694 03:05:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:49.694 03:05:53 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:49.694 03:05:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:05:49.694 03:05:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:49.953 03:05:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:49.953 03:05:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:49.953 03:05:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:49.953 03:05:53 -- common/autotest_common.sh@1541 -- # continue 00:05:49.953 03:05:53 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:49.953 03:05:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:49.953 03:05:53 -- common/autotest_common.sh@10 -- # set +x 00:05:49.953 03:05:53 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:49.953 03:05:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:49.953 03:05:53 -- common/autotest_common.sh@10 -- # set +x 00:05:49.953 03:05:53 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:50.523 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:50.782 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.782 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.782 03:05:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:50.782 03:05:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:50.782 03:05:54 -- common/autotest_common.sh@10 -- # set +x 00:05:50.782 03:05:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:50.782 03:05:54 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:50.782 03:05:54 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:50.782 03:05:54 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:50.782 03:05:54 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:50.782 03:05:54 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:50.782 03:05:54 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:50.782 03:05:54 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:50.782 
03:05:54 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:50.782 03:05:54 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:50.782 03:05:54 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:50.782 03:05:54 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:50.782 03:05:54 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:51.042 03:05:54 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:51.042 03:05:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:51.042 03:05:54 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:51.042 03:05:54 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:51.042 03:05:54 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:51.042 03:05:54 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:51.042 03:05:54 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:51.042 03:05:54 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:51.042 03:05:54 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:51.042 03:05:54 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:51.042 03:05:54 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:51.042 03:05:54 -- common/autotest_common.sh@1570 -- # return 0 00:05:51.042 03:05:54 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:51.042 03:05:54 -- common/autotest_common.sh@1578 -- # return 0 00:05:51.042 03:05:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:51.042 03:05:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:51.042 03:05:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:51.042 03:05:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:51.042 03:05:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:51.042 03:05:54 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:05:51.042 03:05:54 -- common/autotest_common.sh@10 -- # set +x 00:05:51.042 03:05:54 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:51.042 03:05:54 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:51.042 03:05:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.042 03:05:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.042 03:05:54 -- common/autotest_common.sh@10 -- # set +x 00:05:51.042 ************************************ 00:05:51.042 START TEST env 00:05:51.042 ************************************ 00:05:51.042 03:05:54 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:51.042 * Looking for test storage... 00:05:51.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:51.042 03:05:54 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:51.042 03:05:54 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:51.042 03:05:54 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:51.302 03:05:54 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:51.302 03:05:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.302 03:05:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.302 03:05:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.302 03:05:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.302 03:05:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.302 03:05:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.302 03:05:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.302 03:05:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.302 03:05:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.302 03:05:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.302 03:05:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.302 03:05:54 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:51.302 03:05:54 env -- scripts/common.sh@345 -- # : 1 00:05:51.302 03:05:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.302 03:05:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.302 03:05:54 env -- scripts/common.sh@365 -- # decimal 1 00:05:51.302 03:05:54 env -- scripts/common.sh@353 -- # local d=1 00:05:51.302 03:05:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.302 03:05:54 env -- scripts/common.sh@355 -- # echo 1 00:05:51.302 03:05:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.302 03:05:54 env -- scripts/common.sh@366 -- # decimal 2 00:05:51.302 03:05:54 env -- scripts/common.sh@353 -- # local d=2 00:05:51.302 03:05:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.302 03:05:54 env -- scripts/common.sh@355 -- # echo 2 00:05:51.302 03:05:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.302 03:05:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.302 03:05:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.302 03:05:54 env -- scripts/common.sh@368 -- # return 0 00:05:51.302 03:05:54 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.302 03:05:54 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:51.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.302 --rc genhtml_branch_coverage=1 00:05:51.302 --rc genhtml_function_coverage=1 00:05:51.302 --rc genhtml_legend=1 00:05:51.302 --rc geninfo_all_blocks=1 00:05:51.302 --rc geninfo_unexecuted_blocks=1 00:05:51.302 00:05:51.302 ' 00:05:51.302 03:05:54 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:51.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.302 --rc genhtml_branch_coverage=1 00:05:51.302 --rc genhtml_function_coverage=1 00:05:51.302 --rc genhtml_legend=1 00:05:51.302 --rc 
geninfo_all_blocks=1 00:05:51.302 --rc geninfo_unexecuted_blocks=1 00:05:51.302 00:05:51.302 ' 00:05:51.302 03:05:54 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:51.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.302 --rc genhtml_branch_coverage=1 00:05:51.302 --rc genhtml_function_coverage=1 00:05:51.302 --rc genhtml_legend=1 00:05:51.302 --rc geninfo_all_blocks=1 00:05:51.302 --rc geninfo_unexecuted_blocks=1 00:05:51.302 00:05:51.302 ' 00:05:51.302 03:05:54 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:51.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.302 --rc genhtml_branch_coverage=1 00:05:51.302 --rc genhtml_function_coverage=1 00:05:51.302 --rc genhtml_legend=1 00:05:51.302 --rc geninfo_all_blocks=1 00:05:51.303 --rc geninfo_unexecuted_blocks=1 00:05:51.303 00:05:51.303 ' 00:05:51.303 03:05:54 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:51.303 03:05:54 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.303 03:05:54 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.303 03:05:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.303 ************************************ 00:05:51.303 START TEST env_memory 00:05:51.303 ************************************ 00:05:51.303 03:05:54 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:51.303 00:05:51.303 00:05:51.303 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.303 http://cunit.sourceforge.net/ 00:05:51.303 00:05:51.303 00:05:51.303 Suite: memory 00:05:51.303 Test: alloc and free memory map ...[2024-11-18 03:05:54.766317] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:51.303 passed 00:05:51.303 Test: mem map translation ...[2024-11-18 03:05:54.809440] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:51.303 [2024-11-18 03:05:54.809530] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:51.303 [2024-11-18 03:05:54.809636] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:51.303 [2024-11-18 03:05:54.809678] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:51.303 passed 00:05:51.303 Test: mem map registration ...[2024-11-18 03:05:54.875143] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:51.303 [2024-11-18 03:05:54.875242] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:51.563 passed 00:05:51.563 Test: mem map adjacent registrations ...passed 00:05:51.563 00:05:51.563 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.563 suites 1 1 n/a 0 0 00:05:51.563 tests 4 4 4 0 0 00:05:51.563 asserts 152 152 152 0 n/a 00:05:51.563 00:05:51.563 Elapsed time = 0.238 seconds 00:05:51.563 00:05:51.563 real 0m0.291s 00:05:51.563 user 0m0.248s 00:05:51.563 sys 0m0.032s 00:05:51.563 03:05:54 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.563 03:05:54 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:51.563 ************************************ 00:05:51.563 END TEST env_memory 00:05:51.563 ************************************ 00:05:51.563 03:05:55 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:51.563 
03:05:55 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.563 03:05:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.563 03:05:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.563 ************************************ 00:05:51.563 START TEST env_vtophys 00:05:51.563 ************************************ 00:05:51.563 03:05:55 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:51.563 EAL: lib.eal log level changed from notice to debug 00:05:51.563 EAL: Detected lcore 0 as core 0 on socket 0 00:05:51.563 EAL: Detected lcore 1 as core 0 on socket 0 00:05:51.563 EAL: Detected lcore 2 as core 0 on socket 0 00:05:51.563 EAL: Detected lcore 3 as core 0 on socket 0 00:05:51.563 EAL: Detected lcore 4 as core 0 on socket 0 00:05:51.563 EAL: Detected lcore 5 as core 0 on socket 0 00:05:51.563 EAL: Detected lcore 6 as core 0 on socket 0 00:05:51.563 EAL: Detected lcore 7 as core 0 on socket 0 00:05:51.563 EAL: Detected lcore 8 as core 0 on socket 0 00:05:51.563 EAL: Detected lcore 9 as core 0 on socket 0 00:05:51.563 EAL: Maximum logical cores by configuration: 128 00:05:51.563 EAL: Detected CPU lcores: 10 00:05:51.563 EAL: Detected NUMA nodes: 1 00:05:51.563 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:51.563 EAL: Detected shared linkage of DPDK 00:05:51.563 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:51.563 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:51.563 EAL: Registered [vdev] bus. 
00:05:51.563 EAL: bus.vdev log level changed from disabled to notice 00:05:51.563 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:51.563 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:51.563 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:51.563 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:51.563 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:51.563 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:51.563 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:51.563 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:51.563 EAL: No shared files mode enabled, IPC will be disabled 00:05:51.563 EAL: No shared files mode enabled, IPC is disabled 00:05:51.563 EAL: Selected IOVA mode 'PA' 00:05:51.563 EAL: Probing VFIO support... 00:05:51.563 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:51.563 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:51.563 EAL: Ask a virtual area of 0x2e000 bytes 00:05:51.563 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:51.563 EAL: Setting up physically contiguous memory... 
00:05:51.563 EAL: Setting maximum number of open files to 524288 00:05:51.563 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:51.563 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:51.563 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.563 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:51.563 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.563 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.563 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:51.563 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:51.563 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.563 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:51.563 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.563 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.563 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:51.563 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:51.563 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.563 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:51.563 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.563 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.563 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:51.563 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:51.563 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.563 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:51.563 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.563 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.563 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:51.563 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:51.563 EAL: Hugepages will be freed exactly as allocated. 
00:05:51.563 EAL: No shared files mode enabled, IPC is disabled 00:05:51.563 EAL: No shared files mode enabled, IPC is disabled 00:05:51.823 EAL: TSC frequency is ~2290000 KHz 00:05:51.823 EAL: Main lcore 0 is ready (tid=7fbd68a1ea40;cpuset=[0]) 00:05:51.823 EAL: Trying to obtain current memory policy. 00:05:51.823 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.823 EAL: Restoring previous memory policy: 0 00:05:51.823 EAL: request: mp_malloc_sync 00:05:51.823 EAL: No shared files mode enabled, IPC is disabled 00:05:51.823 EAL: Heap on socket 0 was expanded by 2MB 00:05:51.823 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:51.823 EAL: No shared files mode enabled, IPC is disabled 00:05:51.823 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:51.823 EAL: Mem event callback 'spdk:(nil)' registered 00:05:51.823 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:51.823 00:05:51.823 00:05:51.823 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.823 http://cunit.sourceforge.net/ 00:05:51.823 00:05:51.823 00:05:51.823 Suite: components_suite 00:05:52.083 Test: vtophys_malloc_test ...passed 00:05:52.083 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:52.083 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.083 EAL: Restoring previous memory policy: 4 00:05:52.083 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.083 EAL: request: mp_malloc_sync 00:05:52.083 EAL: No shared files mode enabled, IPC is disabled 00:05:52.083 EAL: Heap on socket 0 was expanded by 4MB 00:05:52.083 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.083 EAL: request: mp_malloc_sync 00:05:52.083 EAL: No shared files mode enabled, IPC is disabled 00:05:52.083 EAL: Heap on socket 0 was shrunk by 4MB 00:05:52.083 EAL: Trying to obtain current memory policy. 
00:05:52.083 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.083 EAL: Restoring previous memory policy: 4 00:05:52.083 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.083 EAL: request: mp_malloc_sync 00:05:52.083 EAL: No shared files mode enabled, IPC is disabled 00:05:52.083 EAL: Heap on socket 0 was expanded by 6MB 00:05:52.083 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.083 EAL: request: mp_malloc_sync 00:05:52.083 EAL: No shared files mode enabled, IPC is disabled 00:05:52.083 EAL: Heap on socket 0 was shrunk by 6MB 00:05:52.083 EAL: Trying to obtain current memory policy. 00:05:52.083 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.083 EAL: Restoring previous memory policy: 4 00:05:52.083 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.083 EAL: request: mp_malloc_sync 00:05:52.083 EAL: No shared files mode enabled, IPC is disabled 00:05:52.083 EAL: Heap on socket 0 was expanded by 10MB 00:05:52.083 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.083 EAL: request: mp_malloc_sync 00:05:52.083 EAL: No shared files mode enabled, IPC is disabled 00:05:52.083 EAL: Heap on socket 0 was shrunk by 10MB 00:05:52.083 EAL: Trying to obtain current memory policy. 00:05:52.083 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.083 EAL: Restoring previous memory policy: 4 00:05:52.083 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.083 EAL: request: mp_malloc_sync 00:05:52.083 EAL: No shared files mode enabled, IPC is disabled 00:05:52.083 EAL: Heap on socket 0 was expanded by 18MB 00:05:52.083 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.083 EAL: request: mp_malloc_sync 00:05:52.083 EAL: No shared files mode enabled, IPC is disabled 00:05:52.083 EAL: Heap on socket 0 was shrunk by 18MB 00:05:52.083 EAL: Trying to obtain current memory policy. 
00:05:52.083 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.083 EAL: Restoring previous memory policy: 4 00:05:52.083 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.083 EAL: request: mp_malloc_sync 00:05:52.083 EAL: No shared files mode enabled, IPC is disabled 00:05:52.083 EAL: Heap on socket 0 was expanded by 34MB 00:05:52.083 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.083 EAL: request: mp_malloc_sync 00:05:52.083 EAL: No shared files mode enabled, IPC is disabled 00:05:52.083 EAL: Heap on socket 0 was shrunk by 34MB 00:05:52.083 EAL: Trying to obtain current memory policy. 00:05:52.083 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.083 EAL: Restoring previous memory policy: 4 00:05:52.083 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.083 EAL: request: mp_malloc_sync 00:05:52.083 EAL: No shared files mode enabled, IPC is disabled 00:05:52.083 EAL: Heap on socket 0 was expanded by 66MB 00:05:52.083 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.083 EAL: request: mp_malloc_sync 00:05:52.083 EAL: No shared files mode enabled, IPC is disabled 00:05:52.083 EAL: Heap on socket 0 was shrunk by 66MB 00:05:52.083 EAL: Trying to obtain current memory policy. 00:05:52.083 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.343 EAL: Restoring previous memory policy: 4 00:05:52.343 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.344 EAL: request: mp_malloc_sync 00:05:52.344 EAL: No shared files mode enabled, IPC is disabled 00:05:52.344 EAL: Heap on socket 0 was expanded by 130MB 00:05:52.344 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.344 EAL: request: mp_malloc_sync 00:05:52.344 EAL: No shared files mode enabled, IPC is disabled 00:05:52.344 EAL: Heap on socket 0 was shrunk by 130MB 00:05:52.344 EAL: Trying to obtain current memory policy. 
00:05:52.344 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.344 EAL: Restoring previous memory policy: 4 00:05:52.344 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.344 EAL: request: mp_malloc_sync 00:05:52.344 EAL: No shared files mode enabled, IPC is disabled 00:05:52.344 EAL: Heap on socket 0 was expanded by 258MB 00:05:52.344 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.344 EAL: request: mp_malloc_sync 00:05:52.344 EAL: No shared files mode enabled, IPC is disabled 00:05:52.344 EAL: Heap on socket 0 was shrunk by 258MB 00:05:52.344 EAL: Trying to obtain current memory policy. 00:05:52.344 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.603 EAL: Restoring previous memory policy: 4 00:05:52.603 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.603 EAL: request: mp_malloc_sync 00:05:52.603 EAL: No shared files mode enabled, IPC is disabled 00:05:52.603 EAL: Heap on socket 0 was expanded by 514MB 00:05:52.603 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.603 EAL: request: mp_malloc_sync 00:05:52.603 EAL: No shared files mode enabled, IPC is disabled 00:05:52.603 EAL: Heap on socket 0 was shrunk by 514MB 00:05:52.603 EAL: Trying to obtain current memory policy. 
00:05:52.603 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.863 EAL: Restoring previous memory policy: 4 00:05:52.863 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.863 EAL: request: mp_malloc_sync 00:05:52.863 EAL: No shared files mode enabled, IPC is disabled 00:05:52.863 EAL: Heap on socket 0 was expanded by 1026MB 00:05:53.124 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.124 passed 00:05:53.124 00:05:53.124 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.124 suites 1 1 n/a 0 0 00:05:53.124 tests 2 2 2 0 0 00:05:53.124 asserts 5302 5302 5302 0 n/a 00:05:53.124 00:05:53.124 Elapsed time = 1.349 seconds 00:05:53.124 EAL: request: mp_malloc_sync 00:05:53.124 EAL: No shared files mode enabled, IPC is disabled 00:05:53.124 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:53.124 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.124 EAL: request: mp_malloc_sync 00:05:53.124 EAL: No shared files mode enabled, IPC is disabled 00:05:53.124 EAL: Heap on socket 0 was shrunk by 2MB 00:05:53.124 EAL: No shared files mode enabled, IPC is disabled 00:05:53.124 EAL: No shared files mode enabled, IPC is disabled 00:05:53.124 EAL: No shared files mode enabled, IPC is disabled 00:05:53.124 00:05:53.124 real 0m1.617s 00:05:53.124 user 0m0.779s 00:05:53.124 sys 0m0.702s 00:05:53.124 03:05:56 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.124 03:05:56 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:53.124 ************************************ 00:05:53.124 END TEST env_vtophys 00:05:53.124 ************************************ 00:05:53.386 03:05:56 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:53.386 03:05:56 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.386 03:05:56 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.386 03:05:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.386 
************************************ 00:05:53.386 START TEST env_pci 00:05:53.386 ************************************ 00:05:53.386 03:05:56 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:53.386 00:05:53.386 00:05:53.386 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.386 http://cunit.sourceforge.net/ 00:05:53.386 00:05:53.386 00:05:53.386 Suite: pci 00:05:53.386 Test: pci_hook ...[2024-11-18 03:05:56.765910] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69121 has claimed it 00:05:53.386 passed 00:05:53.386 00:05:53.387 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.387 suites 1 1 n/a 0 0 00:05:53.387 tests 1 1 1 0 0 00:05:53.387 asserts 25 25 25 0 n/a 00:05:53.387 00:05:53.387 Elapsed time = 0.006 seconds 00:05:53.387 EAL: Cannot find device (10000:00:01.0) 00:05:53.387 EAL: Failed to attach device on primary process 00:05:53.387 00:05:53.387 real 0m0.097s 00:05:53.387 user 0m0.042s 00:05:53.387 sys 0m0.054s 00:05:53.387 03:05:56 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.387 03:05:56 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:53.387 ************************************ 00:05:53.387 END TEST env_pci 00:05:53.387 ************************************ 00:05:53.387 03:05:56 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:53.387 03:05:56 env -- env/env.sh@15 -- # uname 00:05:53.387 03:05:56 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:53.387 03:05:56 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:53.387 03:05:56 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:53.387 03:05:56 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:53.387 03:05:56 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.387 03:05:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.387 ************************************ 00:05:53.387 START TEST env_dpdk_post_init 00:05:53.387 ************************************ 00:05:53.387 03:05:56 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:53.387 EAL: Detected CPU lcores: 10 00:05:53.387 EAL: Detected NUMA nodes: 1 00:05:53.387 EAL: Detected shared linkage of DPDK 00:05:53.647 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:53.647 EAL: Selected IOVA mode 'PA' 00:05:53.647 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:53.647 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:53.647 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:53.647 Starting DPDK initialization... 00:05:53.647 Starting SPDK post initialization... 00:05:53.647 SPDK NVMe probe 00:05:53.647 Attaching to 0000:00:10.0 00:05:53.647 Attaching to 0000:00:11.0 00:05:53.647 Attached to 0000:00:10.0 00:05:53.647 Attached to 0000:00:11.0 00:05:53.647 Cleaning up... 
00:05:53.647 00:05:53.647 real 0m0.252s 00:05:53.647 user 0m0.067s 00:05:53.647 sys 0m0.085s 00:05:53.647 03:05:57 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.647 ************************************ 00:05:53.647 END TEST env_dpdk_post_init 00:05:53.647 ************************************ 00:05:53.647 03:05:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:53.647 03:05:57 env -- env/env.sh@26 -- # uname 00:05:53.647 03:05:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:53.647 03:05:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:53.647 03:05:57 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.647 03:05:57 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.647 03:05:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.647 ************************************ 00:05:53.647 START TEST env_mem_callbacks 00:05:53.647 ************************************ 00:05:53.647 03:05:57 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:53.906 EAL: Detected CPU lcores: 10 00:05:53.906 EAL: Detected NUMA nodes: 1 00:05:53.906 EAL: Detected shared linkage of DPDK 00:05:53.906 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:53.906 EAL: Selected IOVA mode 'PA' 00:05:53.906 00:05:53.906 00:05:53.906 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.906 http://cunit.sourceforge.net/ 00:05:53.906 00:05:53.906 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:53.906 00:05:53.906 Suite: memory 00:05:53.906 Test: test ... 
00:05:53.906 register 0x200000200000 2097152 00:05:53.906 malloc 3145728 00:05:53.906 register 0x200000400000 4194304 00:05:53.906 buf 0x200000500000 len 3145728 PASSED 00:05:53.906 malloc 64 00:05:53.906 buf 0x2000004fff40 len 64 PASSED 00:05:53.906 malloc 4194304 00:05:53.906 register 0x200000800000 6291456 00:05:53.906 buf 0x200000a00000 len 4194304 PASSED 00:05:53.906 free 0x200000500000 3145728 00:05:53.906 free 0x2000004fff40 64 00:05:53.906 unregister 0x200000400000 4194304 PASSED 00:05:53.906 free 0x200000a00000 4194304 00:05:53.906 unregister 0x200000800000 6291456 PASSED 00:05:53.906 malloc 8388608 00:05:53.906 register 0x200000400000 10485760 00:05:53.906 buf 0x200000600000 len 8388608 PASSED 00:05:53.906 free 0x200000600000 8388608 00:05:53.906 unregister 0x200000400000 10485760 PASSED 00:05:53.906 passed 00:05:53.906 00:05:53.906 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.906 suites 1 1 n/a 0 0 00:05:53.906 tests 1 1 1 0 0 00:05:53.906 asserts 15 15 15 0 n/a 00:05:53.906 00:05:53.906 Elapsed time = 0.011 seconds 00:05:53.906 00:05:53.906 real 0m0.202s 00:05:53.906 user 0m0.030s 00:05:53.906 sys 0m0.070s 00:05:53.906 03:05:57 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.906 03:05:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:53.906 ************************************ 00:05:53.906 END TEST env_mem_callbacks 00:05:53.906 ************************************ 00:05:53.906 00:05:53.906 real 0m3.009s 00:05:53.906 user 0m1.384s 00:05:53.906 sys 0m1.285s 00:05:53.906 ************************************ 00:05:53.906 END TEST env 00:05:53.906 ************************************ 00:05:53.906 03:05:57 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.906 03:05:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:54.166 03:05:57 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:54.166 03:05:57 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.166 03:05:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.166 03:05:57 -- common/autotest_common.sh@10 -- # set +x 00:05:54.166 ************************************ 00:05:54.166 START TEST rpc 00:05:54.166 ************************************ 00:05:54.166 03:05:57 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:54.166 * Looking for test storage... 00:05:54.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:54.166 03:05:57 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:54.166 03:05:57 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:54.166 03:05:57 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:54.166 03:05:57 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:54.166 03:05:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.166 03:05:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.166 03:05:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.166 03:05:57 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.166 03:05:57 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.166 03:05:57 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.166 03:05:57 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.166 03:05:57 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.166 03:05:57 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.166 03:05:57 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.166 03:05:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.166 03:05:57 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:54.166 03:05:57 rpc -- scripts/common.sh@345 -- # : 1 00:05:54.166 03:05:57 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.166 03:05:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.166 03:05:57 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:54.166 03:05:57 rpc -- scripts/common.sh@353 -- # local d=1 00:05:54.166 03:05:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.166 03:05:57 rpc -- scripts/common.sh@355 -- # echo 1 00:05:54.166 03:05:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.166 03:05:57 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:54.166 03:05:57 rpc -- scripts/common.sh@353 -- # local d=2 00:05:54.166 03:05:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.426 03:05:57 rpc -- scripts/common.sh@355 -- # echo 2 00:05:54.426 03:05:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.426 03:05:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.426 03:05:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.426 03:05:57 rpc -- scripts/common.sh@368 -- # return 0 00:05:54.426 03:05:57 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.426 03:05:57 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:54.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.426 --rc genhtml_branch_coverage=1 00:05:54.426 --rc genhtml_function_coverage=1 00:05:54.426 --rc genhtml_legend=1 00:05:54.426 --rc geninfo_all_blocks=1 00:05:54.426 --rc geninfo_unexecuted_blocks=1 00:05:54.426 00:05:54.426 ' 00:05:54.426 03:05:57 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:54.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.426 --rc genhtml_branch_coverage=1 00:05:54.426 --rc genhtml_function_coverage=1 00:05:54.426 --rc genhtml_legend=1 00:05:54.426 --rc geninfo_all_blocks=1 00:05:54.426 --rc geninfo_unexecuted_blocks=1 00:05:54.426 00:05:54.426 ' 00:05:54.426 03:05:57 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:54.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:54.426 --rc genhtml_branch_coverage=1 00:05:54.426 --rc genhtml_function_coverage=1 00:05:54.426 --rc genhtml_legend=1 00:05:54.426 --rc geninfo_all_blocks=1 00:05:54.426 --rc geninfo_unexecuted_blocks=1 00:05:54.426 00:05:54.426 ' 00:05:54.426 03:05:57 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:54.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.426 --rc genhtml_branch_coverage=1 00:05:54.426 --rc genhtml_function_coverage=1 00:05:54.426 --rc genhtml_legend=1 00:05:54.426 --rc geninfo_all_blocks=1 00:05:54.426 --rc geninfo_unexecuted_blocks=1 00:05:54.426 00:05:54.426 ' 00:05:54.426 03:05:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69245 00:05:54.426 03:05:57 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:54.427 03:05:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.427 03:05:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69245 00:05:54.427 03:05:57 rpc -- common/autotest_common.sh@831 -- # '[' -z 69245 ']' 00:05:54.427 03:05:57 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.427 03:05:57 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.427 03:05:57 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.427 03:05:57 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.427 03:05:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.427 [2024-11-18 03:05:57.847501] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:54.427 [2024-11-18 03:05:57.847739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69245 ] 00:05:54.687 [2024-11-18 03:05:58.009069] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.687 [2024-11-18 03:05:58.059530] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:54.687 [2024-11-18 03:05:58.059586] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69245' to capture a snapshot of events at runtime. 00:05:54.687 [2024-11-18 03:05:58.059598] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:54.687 [2024-11-18 03:05:58.059622] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:54.687 [2024-11-18 03:05:58.059635] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69245 for offline analysis/debug. 
00:05:54.687 [2024-11-18 03:05:58.059668] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.257 03:05:58 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.257 03:05:58 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:55.257 03:05:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:55.257 03:05:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:55.257 03:05:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:55.257 03:05:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:55.257 03:05:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.257 03:05:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.257 03:05:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.257 ************************************ 00:05:55.257 START TEST rpc_integrity 00:05:55.257 ************************************ 00:05:55.257 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:55.257 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:55.257 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.257 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.257 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.257 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:55.257 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:55.257 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:55.257 03:05:58 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:55.257 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.257 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.257 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.257 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:55.257 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:55.257 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.257 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.257 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.257 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:55.257 { 00:05:55.257 "name": "Malloc0", 00:05:55.257 "aliases": [ 00:05:55.257 "1d485b4e-5887-4c20-b062-5b8ac1cb3804" 00:05:55.257 ], 00:05:55.257 "product_name": "Malloc disk", 00:05:55.257 "block_size": 512, 00:05:55.257 "num_blocks": 16384, 00:05:55.257 "uuid": "1d485b4e-5887-4c20-b062-5b8ac1cb3804", 00:05:55.257 "assigned_rate_limits": { 00:05:55.257 "rw_ios_per_sec": 0, 00:05:55.257 "rw_mbytes_per_sec": 0, 00:05:55.257 "r_mbytes_per_sec": 0, 00:05:55.257 "w_mbytes_per_sec": 0 00:05:55.257 }, 00:05:55.257 "claimed": false, 00:05:55.257 "zoned": false, 00:05:55.257 "supported_io_types": { 00:05:55.257 "read": true, 00:05:55.257 "write": true, 00:05:55.257 "unmap": true, 00:05:55.257 "flush": true, 00:05:55.257 "reset": true, 00:05:55.257 "nvme_admin": false, 00:05:55.257 "nvme_io": false, 00:05:55.257 "nvme_io_md": false, 00:05:55.257 "write_zeroes": true, 00:05:55.257 "zcopy": true, 00:05:55.257 "get_zone_info": false, 00:05:55.257 "zone_management": false, 00:05:55.257 "zone_append": false, 00:05:55.257 "compare": false, 00:05:55.257 "compare_and_write": false, 00:05:55.257 "abort": true, 00:05:55.257 "seek_hole": false, 
00:05:55.257 "seek_data": false, 00:05:55.257 "copy": true, 00:05:55.257 "nvme_iov_md": false 00:05:55.257 }, 00:05:55.257 "memory_domains": [ 00:05:55.257 { 00:05:55.257 "dma_device_id": "system", 00:05:55.257 "dma_device_type": 1 00:05:55.257 }, 00:05:55.257 { 00:05:55.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.257 "dma_device_type": 2 00:05:55.257 } 00:05:55.257 ], 00:05:55.257 "driver_specific": {} 00:05:55.257 } 00:05:55.257 ]' 00:05:55.257 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:55.518 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:55.518 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:55.518 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.518 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.518 [2024-11-18 03:05:58.861663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:55.518 [2024-11-18 03:05:58.861753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:55.518 [2024-11-18 03:05:58.861800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:55.518 [2024-11-18 03:05:58.861812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:55.518 [2024-11-18 03:05:58.864460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:55.518 [2024-11-18 03:05:58.864548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:55.518 Passthru0 00:05:55.518 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.518 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:55.518 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.518 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:55.518 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.518 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:55.518 { 00:05:55.518 "name": "Malloc0", 00:05:55.518 "aliases": [ 00:05:55.518 "1d485b4e-5887-4c20-b062-5b8ac1cb3804" 00:05:55.518 ], 00:05:55.518 "product_name": "Malloc disk", 00:05:55.518 "block_size": 512, 00:05:55.518 "num_blocks": 16384, 00:05:55.518 "uuid": "1d485b4e-5887-4c20-b062-5b8ac1cb3804", 00:05:55.518 "assigned_rate_limits": { 00:05:55.518 "rw_ios_per_sec": 0, 00:05:55.518 "rw_mbytes_per_sec": 0, 00:05:55.518 "r_mbytes_per_sec": 0, 00:05:55.518 "w_mbytes_per_sec": 0 00:05:55.518 }, 00:05:55.518 "claimed": true, 00:05:55.518 "claim_type": "exclusive_write", 00:05:55.518 "zoned": false, 00:05:55.518 "supported_io_types": { 00:05:55.518 "read": true, 00:05:55.518 "write": true, 00:05:55.518 "unmap": true, 00:05:55.518 "flush": true, 00:05:55.518 "reset": true, 00:05:55.518 "nvme_admin": false, 00:05:55.518 "nvme_io": false, 00:05:55.518 "nvme_io_md": false, 00:05:55.518 "write_zeroes": true, 00:05:55.518 "zcopy": true, 00:05:55.518 "get_zone_info": false, 00:05:55.518 "zone_management": false, 00:05:55.518 "zone_append": false, 00:05:55.518 "compare": false, 00:05:55.518 "compare_and_write": false, 00:05:55.518 "abort": true, 00:05:55.518 "seek_hole": false, 00:05:55.518 "seek_data": false, 00:05:55.518 "copy": true, 00:05:55.518 "nvme_iov_md": false 00:05:55.518 }, 00:05:55.518 "memory_domains": [ 00:05:55.518 { 00:05:55.518 "dma_device_id": "system", 00:05:55.518 "dma_device_type": 1 00:05:55.518 }, 00:05:55.518 { 00:05:55.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.518 "dma_device_type": 2 00:05:55.518 } 00:05:55.518 ], 00:05:55.518 "driver_specific": {} 00:05:55.518 }, 00:05:55.518 { 00:05:55.518 "name": "Passthru0", 00:05:55.518 "aliases": [ 00:05:55.518 "6ae8d3ca-7a48-5391-9ce1-c59e40116351" 00:05:55.518 ], 00:05:55.518 "product_name": "passthru", 00:05:55.518 
"block_size": 512, 00:05:55.518 "num_blocks": 16384, 00:05:55.518 "uuid": "6ae8d3ca-7a48-5391-9ce1-c59e40116351", 00:05:55.518 "assigned_rate_limits": { 00:05:55.518 "rw_ios_per_sec": 0, 00:05:55.518 "rw_mbytes_per_sec": 0, 00:05:55.518 "r_mbytes_per_sec": 0, 00:05:55.518 "w_mbytes_per_sec": 0 00:05:55.518 }, 00:05:55.518 "claimed": false, 00:05:55.518 "zoned": false, 00:05:55.518 "supported_io_types": { 00:05:55.518 "read": true, 00:05:55.518 "write": true, 00:05:55.518 "unmap": true, 00:05:55.518 "flush": true, 00:05:55.518 "reset": true, 00:05:55.518 "nvme_admin": false, 00:05:55.518 "nvme_io": false, 00:05:55.518 "nvme_io_md": false, 00:05:55.518 "write_zeroes": true, 00:05:55.518 "zcopy": true, 00:05:55.518 "get_zone_info": false, 00:05:55.518 "zone_management": false, 00:05:55.518 "zone_append": false, 00:05:55.518 "compare": false, 00:05:55.518 "compare_and_write": false, 00:05:55.518 "abort": true, 00:05:55.518 "seek_hole": false, 00:05:55.518 "seek_data": false, 00:05:55.518 "copy": true, 00:05:55.518 "nvme_iov_md": false 00:05:55.518 }, 00:05:55.518 "memory_domains": [ 00:05:55.518 { 00:05:55.518 "dma_device_id": "system", 00:05:55.518 "dma_device_type": 1 00:05:55.518 }, 00:05:55.518 { 00:05:55.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.518 "dma_device_type": 2 00:05:55.518 } 00:05:55.518 ], 00:05:55.518 "driver_specific": { 00:05:55.518 "passthru": { 00:05:55.518 "name": "Passthru0", 00:05:55.518 "base_bdev_name": "Malloc0" 00:05:55.518 } 00:05:55.518 } 00:05:55.518 } 00:05:55.518 ]' 00:05:55.518 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:55.518 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:55.518 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:55.518 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.518 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.518 03:05:58 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.518 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:55.518 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.518 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.518 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.518 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:55.518 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.518 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.518 03:05:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.518 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:55.518 03:05:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:55.518 ************************************ 00:05:55.518 END TEST rpc_integrity 00:05:55.518 ************************************ 00:05:55.518 03:05:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:55.518 00:05:55.518 real 0m0.327s 00:05:55.518 user 0m0.188s 00:05:55.518 sys 0m0.061s 00:05:55.518 03:05:59 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.518 03:05:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.518 03:05:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:55.518 03:05:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.518 03:05:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.518 03:05:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.518 ************************************ 00:05:55.518 START TEST rpc_plugins 00:05:55.518 ************************************ 00:05:55.518 03:05:59 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:55.778 03:05:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:55.778 03:05:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.778 03:05:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.778 03:05:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.778 03:05:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:55.778 03:05:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:55.778 03:05:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.778 03:05:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.778 03:05:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.778 03:05:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:55.778 { 00:05:55.778 "name": "Malloc1", 00:05:55.778 "aliases": [ 00:05:55.778 "a370ad4c-60fd-414a-9f58-1fad52106540" 00:05:55.778 ], 00:05:55.778 "product_name": "Malloc disk", 00:05:55.778 "block_size": 4096, 00:05:55.778 "num_blocks": 256, 00:05:55.778 "uuid": "a370ad4c-60fd-414a-9f58-1fad52106540", 00:05:55.779 "assigned_rate_limits": { 00:05:55.779 "rw_ios_per_sec": 0, 00:05:55.779 "rw_mbytes_per_sec": 0, 00:05:55.779 "r_mbytes_per_sec": 0, 00:05:55.779 "w_mbytes_per_sec": 0 00:05:55.779 }, 00:05:55.779 "claimed": false, 00:05:55.779 "zoned": false, 00:05:55.779 "supported_io_types": { 00:05:55.779 "read": true, 00:05:55.779 "write": true, 00:05:55.779 "unmap": true, 00:05:55.779 "flush": true, 00:05:55.779 "reset": true, 00:05:55.779 "nvme_admin": false, 00:05:55.779 "nvme_io": false, 00:05:55.779 "nvme_io_md": false, 00:05:55.779 "write_zeroes": true, 00:05:55.779 "zcopy": true, 00:05:55.779 "get_zone_info": false, 00:05:55.779 "zone_management": false, 00:05:55.779 "zone_append": false, 00:05:55.779 "compare": false, 00:05:55.779 "compare_and_write": false, 00:05:55.779 "abort": true, 00:05:55.779 "seek_hole": false, 00:05:55.779 "seek_data": false, 00:05:55.779 "copy": 
true, 00:05:55.779 "nvme_iov_md": false 00:05:55.779 }, 00:05:55.779 "memory_domains": [ 00:05:55.779 { 00:05:55.779 "dma_device_id": "system", 00:05:55.779 "dma_device_type": 1 00:05:55.779 }, 00:05:55.779 { 00:05:55.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.779 "dma_device_type": 2 00:05:55.779 } 00:05:55.779 ], 00:05:55.779 "driver_specific": {} 00:05:55.779 } 00:05:55.779 ]' 00:05:55.779 03:05:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:55.779 03:05:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:55.779 03:05:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:55.779 03:05:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.779 03:05:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.779 03:05:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.779 03:05:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:55.779 03:05:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.779 03:05:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.779 03:05:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.779 03:05:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:55.779 03:05:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:55.779 03:05:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:55.779 00:05:55.779 real 0m0.169s 00:05:55.779 user 0m0.099s 00:05:55.779 sys 0m0.031s 00:05:55.779 03:05:59 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.779 03:05:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.779 ************************************ 00:05:55.779 END TEST rpc_plugins 00:05:55.779 ************************************ 00:05:55.779 03:05:59 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:55.779 03:05:59 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.779 03:05:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.779 03:05:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.779 ************************************ 00:05:55.779 START TEST rpc_trace_cmd_test 00:05:55.779 ************************************ 00:05:55.779 03:05:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:55.779 03:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:55.779 03:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:55.779 03:05:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.779 03:05:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:55.779 03:05:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.039 03:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:56.039 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69245", 00:05:56.039 "tpoint_group_mask": "0x8", 00:05:56.039 "iscsi_conn": { 00:05:56.039 "mask": "0x2", 00:05:56.039 "tpoint_mask": "0x0" 00:05:56.039 }, 00:05:56.039 "scsi": { 00:05:56.039 "mask": "0x4", 00:05:56.039 "tpoint_mask": "0x0" 00:05:56.039 }, 00:05:56.039 "bdev": { 00:05:56.039 "mask": "0x8", 00:05:56.039 "tpoint_mask": "0xffffffffffffffff" 00:05:56.039 }, 00:05:56.039 "nvmf_rdma": { 00:05:56.039 "mask": "0x10", 00:05:56.039 "tpoint_mask": "0x0" 00:05:56.039 }, 00:05:56.039 "nvmf_tcp": { 00:05:56.039 "mask": "0x20", 00:05:56.039 "tpoint_mask": "0x0" 00:05:56.039 }, 00:05:56.039 "ftl": { 00:05:56.039 "mask": "0x40", 00:05:56.039 "tpoint_mask": "0x0" 00:05:56.039 }, 00:05:56.039 "blobfs": { 00:05:56.039 "mask": "0x80", 00:05:56.039 "tpoint_mask": "0x0" 00:05:56.039 }, 00:05:56.039 "dsa": { 00:05:56.039 "mask": "0x200", 00:05:56.039 "tpoint_mask": "0x0" 00:05:56.039 }, 00:05:56.039 "thread": { 00:05:56.039 "mask": "0x400", 00:05:56.039 
"tpoint_mask": "0x0" 00:05:56.039 }, 00:05:56.039 "nvme_pcie": { 00:05:56.039 "mask": "0x800", 00:05:56.039 "tpoint_mask": "0x0" 00:05:56.039 }, 00:05:56.039 "iaa": { 00:05:56.039 "mask": "0x1000", 00:05:56.039 "tpoint_mask": "0x0" 00:05:56.039 }, 00:05:56.039 "nvme_tcp": { 00:05:56.039 "mask": "0x2000", 00:05:56.039 "tpoint_mask": "0x0" 00:05:56.039 }, 00:05:56.039 "bdev_nvme": { 00:05:56.039 "mask": "0x4000", 00:05:56.039 "tpoint_mask": "0x0" 00:05:56.039 }, 00:05:56.039 "sock": { 00:05:56.039 "mask": "0x8000", 00:05:56.039 "tpoint_mask": "0x0" 00:05:56.039 }, 00:05:56.039 "blob": { 00:05:56.039 "mask": "0x10000", 00:05:56.039 "tpoint_mask": "0x0" 00:05:56.039 }, 00:05:56.039 "bdev_raid": { 00:05:56.039 "mask": "0x20000", 00:05:56.039 "tpoint_mask": "0x0" 00:05:56.039 } 00:05:56.039 }' 00:05:56.039 03:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:56.039 03:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:56.039 03:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:56.039 03:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:56.039 03:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:56.039 03:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:56.039 03:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:56.039 03:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:56.039 03:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:56.039 ************************************ 00:05:56.039 END TEST rpc_trace_cmd_test 00:05:56.039 ************************************ 00:05:56.039 03:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:56.039 00:05:56.039 real 0m0.255s 00:05:56.039 user 0m0.198s 00:05:56.039 sys 0m0.045s 00:05:56.039 03:05:59 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.039 03:05:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:56.299 03:05:59 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:56.299 03:05:59 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:56.299 03:05:59 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:56.299 03:05:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.299 03:05:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.299 03:05:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.299 ************************************ 00:05:56.299 START TEST rpc_daemon_integrity 00:05:56.299 ************************************ 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:56.299 { 00:05:56.299 "name": "Malloc2", 00:05:56.299 "aliases": [ 00:05:56.299 "f120a232-d574-4564-a0d8-cd970861b4d5" 00:05:56.299 ], 00:05:56.299 "product_name": "Malloc disk", 00:05:56.299 "block_size": 512, 00:05:56.299 "num_blocks": 16384, 00:05:56.299 "uuid": "f120a232-d574-4564-a0d8-cd970861b4d5", 00:05:56.299 "assigned_rate_limits": { 00:05:56.299 "rw_ios_per_sec": 0, 00:05:56.299 "rw_mbytes_per_sec": 0, 00:05:56.299 "r_mbytes_per_sec": 0, 00:05:56.299 "w_mbytes_per_sec": 0 00:05:56.299 }, 00:05:56.299 "claimed": false, 00:05:56.299 "zoned": false, 00:05:56.299 "supported_io_types": { 00:05:56.299 "read": true, 00:05:56.299 "write": true, 00:05:56.299 "unmap": true, 00:05:56.299 "flush": true, 00:05:56.299 "reset": true, 00:05:56.299 "nvme_admin": false, 00:05:56.299 "nvme_io": false, 00:05:56.299 "nvme_io_md": false, 00:05:56.299 "write_zeroes": true, 00:05:56.299 "zcopy": true, 00:05:56.299 "get_zone_info": false, 00:05:56.299 "zone_management": false, 00:05:56.299 "zone_append": false, 00:05:56.299 "compare": false, 00:05:56.299 "compare_and_write": false, 00:05:56.299 "abort": true, 00:05:56.299 "seek_hole": false, 00:05:56.299 "seek_data": false, 00:05:56.299 "copy": true, 00:05:56.299 "nvme_iov_md": false 00:05:56.299 }, 00:05:56.299 "memory_domains": [ 00:05:56.299 { 00:05:56.299 "dma_device_id": "system", 00:05:56.299 "dma_device_type": 1 00:05:56.299 }, 00:05:56.299 { 00:05:56.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.299 "dma_device_type": 2 00:05:56.299 } 00:05:56.299 ], 00:05:56.299 "driver_specific": {} 00:05:56.299 } 00:05:56.299 ]' 
00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.299 [2024-11-18 03:05:59.792810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:56.299 [2024-11-18 03:05:59.792875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:56.299 [2024-11-18 03:05:59.792899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:56.299 [2024-11-18 03:05:59.792909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:56.299 [2024-11-18 03:05:59.795427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:56.299 [2024-11-18 03:05:59.795534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:56.299 Passthru0 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.299 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:56.299 { 00:05:56.299 "name": "Malloc2", 00:05:56.299 "aliases": [ 00:05:56.299 "f120a232-d574-4564-a0d8-cd970861b4d5" 00:05:56.299 ], 00:05:56.299 "product_name": "Malloc disk", 00:05:56.299 "block_size": 
512, 00:05:56.299 "num_blocks": 16384, 00:05:56.299 "uuid": "f120a232-d574-4564-a0d8-cd970861b4d5", 00:05:56.299 "assigned_rate_limits": { 00:05:56.299 "rw_ios_per_sec": 0, 00:05:56.299 "rw_mbytes_per_sec": 0, 00:05:56.299 "r_mbytes_per_sec": 0, 00:05:56.299 "w_mbytes_per_sec": 0 00:05:56.299 }, 00:05:56.299 "claimed": true, 00:05:56.299 "claim_type": "exclusive_write", 00:05:56.299 "zoned": false, 00:05:56.299 "supported_io_types": { 00:05:56.299 "read": true, 00:05:56.299 "write": true, 00:05:56.299 "unmap": true, 00:05:56.299 "flush": true, 00:05:56.299 "reset": true, 00:05:56.300 "nvme_admin": false, 00:05:56.300 "nvme_io": false, 00:05:56.300 "nvme_io_md": false, 00:05:56.300 "write_zeroes": true, 00:05:56.300 "zcopy": true, 00:05:56.300 "get_zone_info": false, 00:05:56.300 "zone_management": false, 00:05:56.300 "zone_append": false, 00:05:56.300 "compare": false, 00:05:56.300 "compare_and_write": false, 00:05:56.300 "abort": true, 00:05:56.300 "seek_hole": false, 00:05:56.300 "seek_data": false, 00:05:56.300 "copy": true, 00:05:56.300 "nvme_iov_md": false 00:05:56.300 }, 00:05:56.300 "memory_domains": [ 00:05:56.300 { 00:05:56.300 "dma_device_id": "system", 00:05:56.300 "dma_device_type": 1 00:05:56.300 }, 00:05:56.300 { 00:05:56.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.300 "dma_device_type": 2 00:05:56.300 } 00:05:56.300 ], 00:05:56.300 "driver_specific": {} 00:05:56.300 }, 00:05:56.300 { 00:05:56.300 "name": "Passthru0", 00:05:56.300 "aliases": [ 00:05:56.300 "b4bfdb60-1cc5-5652-b754-38dd7d3b0827" 00:05:56.300 ], 00:05:56.300 "product_name": "passthru", 00:05:56.300 "block_size": 512, 00:05:56.300 "num_blocks": 16384, 00:05:56.300 "uuid": "b4bfdb60-1cc5-5652-b754-38dd7d3b0827", 00:05:56.300 "assigned_rate_limits": { 00:05:56.300 "rw_ios_per_sec": 0, 00:05:56.300 "rw_mbytes_per_sec": 0, 00:05:56.300 "r_mbytes_per_sec": 0, 00:05:56.300 "w_mbytes_per_sec": 0 00:05:56.300 }, 00:05:56.300 "claimed": false, 00:05:56.300 "zoned": false, 00:05:56.300 
"supported_io_types": { 00:05:56.300 "read": true, 00:05:56.300 "write": true, 00:05:56.300 "unmap": true, 00:05:56.300 "flush": true, 00:05:56.300 "reset": true, 00:05:56.300 "nvme_admin": false, 00:05:56.300 "nvme_io": false, 00:05:56.300 "nvme_io_md": false, 00:05:56.300 "write_zeroes": true, 00:05:56.300 "zcopy": true, 00:05:56.300 "get_zone_info": false, 00:05:56.300 "zone_management": false, 00:05:56.300 "zone_append": false, 00:05:56.300 "compare": false, 00:05:56.300 "compare_and_write": false, 00:05:56.300 "abort": true, 00:05:56.300 "seek_hole": false, 00:05:56.300 "seek_data": false, 00:05:56.300 "copy": true, 00:05:56.300 "nvme_iov_md": false 00:05:56.300 }, 00:05:56.300 "memory_domains": [ 00:05:56.300 { 00:05:56.300 "dma_device_id": "system", 00:05:56.300 "dma_device_type": 1 00:05:56.300 }, 00:05:56.300 { 00:05:56.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.300 "dma_device_type": 2 00:05:56.300 } 00:05:56.300 ], 00:05:56.300 "driver_specific": { 00:05:56.300 "passthru": { 00:05:56.300 "name": "Passthru0", 00:05:56.300 "base_bdev_name": "Malloc2" 00:05:56.300 } 00:05:56.300 } 00:05:56.300 } 00:05:56.300 ]' 00:05:56.300 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:56.559 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:56.559 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:56.559 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.559 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.559 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.559 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:56.559 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.559 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:56.559 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.559 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:56.559 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.559 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.559 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.559 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:56.559 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:56.559 ************************************ 00:05:56.559 END TEST rpc_daemon_integrity 00:05:56.559 ************************************ 00:05:56.559 03:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:56.559 00:05:56.559 real 0m0.316s 00:05:56.559 user 0m0.194s 00:05:56.559 sys 0m0.050s 00:05:56.559 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.559 03:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.559 03:06:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:56.559 03:06:00 rpc -- rpc/rpc.sh@84 -- # killprocess 69245 00:05:56.559 03:06:00 rpc -- common/autotest_common.sh@950 -- # '[' -z 69245 ']' 00:05:56.559 03:06:00 rpc -- common/autotest_common.sh@954 -- # kill -0 69245 00:05:56.559 03:06:00 rpc -- common/autotest_common.sh@955 -- # uname 00:05:56.559 03:06:00 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.559 03:06:00 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69245 00:05:56.559 killing process with pid 69245 00:05:56.559 03:06:00 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.559 03:06:00 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.559 03:06:00 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 69245' 00:05:56.559 03:06:00 rpc -- common/autotest_common.sh@969 -- # kill 69245 00:05:56.559 03:06:00 rpc -- common/autotest_common.sh@974 -- # wait 69245 00:05:57.129 00:05:57.129 real 0m2.924s 00:05:57.129 user 0m3.538s 00:05:57.129 sys 0m0.864s 00:05:57.129 ************************************ 00:05:57.129 END TEST rpc 00:05:57.129 ************************************ 00:05:57.129 03:06:00 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.129 03:06:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.129 03:06:00 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:57.129 03:06:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.129 03:06:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.129 03:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:57.129 ************************************ 00:05:57.129 START TEST skip_rpc 00:05:57.129 ************************************ 00:05:57.129 03:06:00 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:57.129 * Looking for test storage... 
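The `killprocess 69245` sequence traced above checks the pid, inspects the process's command name with `ps --no-headers -o comm=`, refuses to signal `sudo`, then kills and waits. A simplified sketch of that flow (illustrative, not the exact `autotest_common.sh` implementation; assumes Linux procps `ps` as in this log):

```shell
# Simplified sketch of the killprocess flow seen in the trace.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1            # process must exist
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" != "sudo" ] || return 1         # never signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap if it is our child
}
```

In the real helper the `ps` output is compared against `reactor_0` and `sudo` before the kill, which is why both strings appear in the trace.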
00:05:57.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:57.129 03:06:00 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:57.129 03:06:00 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:57.129 03:06:00 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:57.390 03:06:00 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.390 03:06:00 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:57.390 03:06:00 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.390 03:06:00 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:57.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.390 --rc genhtml_branch_coverage=1 00:05:57.390 --rc genhtml_function_coverage=1 00:05:57.390 --rc genhtml_legend=1 00:05:57.390 --rc geninfo_all_blocks=1 00:05:57.390 --rc geninfo_unexecuted_blocks=1 00:05:57.390 00:05:57.390 ' 00:05:57.390 03:06:00 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:57.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.390 --rc genhtml_branch_coverage=1 00:05:57.390 --rc genhtml_function_coverage=1 00:05:57.390 --rc genhtml_legend=1 00:05:57.390 --rc geninfo_all_blocks=1 00:05:57.390 --rc geninfo_unexecuted_blocks=1 00:05:57.390 00:05:57.390 ' 00:05:57.390 03:06:00 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:57.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.390 --rc genhtml_branch_coverage=1 00:05:57.390 --rc genhtml_function_coverage=1 00:05:57.390 --rc genhtml_legend=1 00:05:57.390 --rc geninfo_all_blocks=1 00:05:57.390 --rc geninfo_unexecuted_blocks=1 00:05:57.390 00:05:57.390 ' 00:05:57.390 03:06:00 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:57.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.390 --rc genhtml_branch_coverage=1 00:05:57.390 --rc genhtml_function_coverage=1 00:05:57.390 --rc genhtml_legend=1 00:05:57.390 --rc geninfo_all_blocks=1 00:05:57.390 --rc geninfo_unexecuted_blocks=1 00:05:57.390 00:05:57.390 ' 00:05:57.390 03:06:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:57.390 03:06:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:57.390 03:06:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:57.390 03:06:00 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.390 03:06:00 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.390 03:06:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.390 ************************************ 00:05:57.390 START TEST skip_rpc 00:05:57.390 ************************************ 00:05:57.390 03:06:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:57.390 03:06:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69451 00:05:57.390 03:06:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:57.390 03:06:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.390 03:06:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:57.390 [2024-11-18 03:06:00.845295] Starting SPDK v24.09.1-pre 
git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:57.390 [2024-11-18 03:06:00.845519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69451 ] 00:05:57.650 [2024-11-18 03:06:01.004571] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.651 [2024-11-18 03:06:01.054896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69451 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69451 ']' 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69451 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69451 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69451' 00:06:02.931 killing process with pid 69451 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69451 00:06:02.931 03:06:05 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69451 00:06:02.931 00:06:02.931 real 0m5.456s 00:06:02.931 user 0m5.059s 00:06:02.931 sys 0m0.322s 00:06:02.931 03:06:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.931 ************************************ 00:06:02.931 END TEST skip_rpc 00:06:02.931 ************************************ 00:06:02.931 03:06:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.931 03:06:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:02.931 03:06:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.931 03:06:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.931 03:06:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.931 
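The `NOT rpc_cmd spdk_get_version` exchange above relies on a wrapper that captures the command's exit status in `es` and succeeds only when the wrapped command failed (the target was started with `--no-rpc-server`, so the RPC must fail for the test to pass). A minimal sketch of that inversion (illustrative, not the exact `autotest_common.sh` code — the real helper's `(( es > 128 ))` branch also distinguishes signal deaths):

```shell
# Invert a command's result: succeed only if the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))    # success here means the command failed, as expected
}

NOT false && echo "wrapped command failed as expected"
```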
************************************ 00:06:02.931 START TEST skip_rpc_with_json 00:06:02.931 ************************************ 00:06:02.931 03:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:02.931 03:06:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:02.931 03:06:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.931 03:06:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69538 00:06:02.931 03:06:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.931 03:06:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69538 00:06:02.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.931 03:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69538 ']' 00:06:02.931 03:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.931 03:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.931 03:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.931 03:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.931 03:06:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.931 [2024-11-18 03:06:06.365597] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
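Earlier in the log, `scripts/common.sh` evaluates `lt 1.15 2` by splitting both versions on `.`/`-`/`:` (`IFS=.-:`) and comparing components left to right, padding the shorter version with zeros. A hedged sketch of that component-wise comparison (names illustrative):

```shell
# Component-wise "less than" for dotted versions, mirroring the
# cmp_versions trace: split on .-: and compare numerically per field.
version_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```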
00:06:02.931 [2024-11-18 03:06:06.365823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69538 ] 00:06:03.191 [2024-11-18 03:06:06.525459] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.191 [2024-11-18 03:06:06.576925] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.797 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.797 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:03.797 03:06:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:03.797 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.797 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.797 [2024-11-18 03:06:07.288928] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:03.797 request: 00:06:03.797 { 00:06:03.797 "trtype": "tcp", 00:06:03.797 "method": "nvmf_get_transports", 00:06:03.797 "req_id": 1 00:06:03.797 } 00:06:03.797 Got JSON-RPC error response 00:06:03.797 response: 00:06:03.797 { 00:06:03.797 "code": -19, 00:06:03.797 "message": "No such device" 00:06:03.797 } 00:06:03.797 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:03.797 03:06:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:03.797 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.797 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.797 [2024-11-18 03:06:07.301012] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:03.797 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.797 03:06:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:03.797 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.797 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.057 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.057 03:06:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:04.057 { 00:06:04.057 "subsystems": [ 00:06:04.057 { 00:06:04.057 "subsystem": "fsdev", 00:06:04.057 "config": [ 00:06:04.057 { 00:06:04.057 "method": "fsdev_set_opts", 00:06:04.057 "params": { 00:06:04.057 "fsdev_io_pool_size": 65535, 00:06:04.057 "fsdev_io_cache_size": 256 00:06:04.057 } 00:06:04.057 } 00:06:04.057 ] 00:06:04.057 }, 00:06:04.057 { 00:06:04.057 "subsystem": "keyring", 00:06:04.057 "config": [] 00:06:04.057 }, 00:06:04.057 { 00:06:04.057 "subsystem": "iobuf", 00:06:04.057 "config": [ 00:06:04.057 { 00:06:04.057 "method": "iobuf_set_options", 00:06:04.057 "params": { 00:06:04.057 "small_pool_count": 8192, 00:06:04.057 "large_pool_count": 1024, 00:06:04.057 "small_bufsize": 8192, 00:06:04.057 "large_bufsize": 135168 00:06:04.057 } 00:06:04.057 } 00:06:04.057 ] 00:06:04.057 }, 00:06:04.057 { 00:06:04.057 "subsystem": "sock", 00:06:04.057 "config": [ 00:06:04.057 { 00:06:04.057 "method": "sock_set_default_impl", 00:06:04.057 "params": { 00:06:04.057 "impl_name": "posix" 00:06:04.057 } 00:06:04.057 }, 00:06:04.057 { 00:06:04.057 "method": "sock_impl_set_options", 00:06:04.057 "params": { 00:06:04.057 "impl_name": "ssl", 00:06:04.057 "recv_buf_size": 4096, 00:06:04.057 "send_buf_size": 4096, 00:06:04.057 "enable_recv_pipe": true, 00:06:04.057 "enable_quickack": false, 00:06:04.057 "enable_placement_id": 0, 00:06:04.057 
"enable_zerocopy_send_server": true, 00:06:04.057 "enable_zerocopy_send_client": false, 00:06:04.057 "zerocopy_threshold": 0, 00:06:04.057 "tls_version": 0, 00:06:04.057 "enable_ktls": false 00:06:04.057 } 00:06:04.057 }, 00:06:04.057 { 00:06:04.057 "method": "sock_impl_set_options", 00:06:04.057 "params": { 00:06:04.057 "impl_name": "posix", 00:06:04.057 "recv_buf_size": 2097152, 00:06:04.057 "send_buf_size": 2097152, 00:06:04.057 "enable_recv_pipe": true, 00:06:04.057 "enable_quickack": false, 00:06:04.057 "enable_placement_id": 0, 00:06:04.057 "enable_zerocopy_send_server": true, 00:06:04.057 "enable_zerocopy_send_client": false, 00:06:04.057 "zerocopy_threshold": 0, 00:06:04.057 "tls_version": 0, 00:06:04.057 "enable_ktls": false 00:06:04.057 } 00:06:04.057 } 00:06:04.057 ] 00:06:04.057 }, 00:06:04.057 { 00:06:04.057 "subsystem": "vmd", 00:06:04.057 "config": [] 00:06:04.057 }, 00:06:04.057 { 00:06:04.057 "subsystem": "accel", 00:06:04.057 "config": [ 00:06:04.057 { 00:06:04.057 "method": "accel_set_options", 00:06:04.057 "params": { 00:06:04.057 "small_cache_size": 128, 00:06:04.057 "large_cache_size": 16, 00:06:04.057 "task_count": 2048, 00:06:04.057 "sequence_count": 2048, 00:06:04.057 "buf_count": 2048 00:06:04.057 } 00:06:04.057 } 00:06:04.057 ] 00:06:04.057 }, 00:06:04.057 { 00:06:04.058 "subsystem": "bdev", 00:06:04.058 "config": [ 00:06:04.058 { 00:06:04.058 "method": "bdev_set_options", 00:06:04.058 "params": { 00:06:04.058 "bdev_io_pool_size": 65535, 00:06:04.058 "bdev_io_cache_size": 256, 00:06:04.058 "bdev_auto_examine": true, 00:06:04.058 "iobuf_small_cache_size": 128, 00:06:04.058 "iobuf_large_cache_size": 16 00:06:04.058 } 00:06:04.058 }, 00:06:04.058 { 00:06:04.058 "method": "bdev_raid_set_options", 00:06:04.058 "params": { 00:06:04.058 "process_window_size_kb": 1024, 00:06:04.058 "process_max_bandwidth_mb_sec": 0 00:06:04.058 } 00:06:04.058 }, 00:06:04.058 { 00:06:04.058 "method": "bdev_iscsi_set_options", 00:06:04.058 "params": { 00:06:04.058 
"timeout_sec": 30 00:06:04.058 } 00:06:04.058 }, 00:06:04.058 { 00:06:04.058 "method": "bdev_nvme_set_options", 00:06:04.058 "params": { 00:06:04.058 "action_on_timeout": "none", 00:06:04.058 "timeout_us": 0, 00:06:04.058 "timeout_admin_us": 0, 00:06:04.058 "keep_alive_timeout_ms": 10000, 00:06:04.058 "arbitration_burst": 0, 00:06:04.058 "low_priority_weight": 0, 00:06:04.058 "medium_priority_weight": 0, 00:06:04.058 "high_priority_weight": 0, 00:06:04.058 "nvme_adminq_poll_period_us": 10000, 00:06:04.058 "nvme_ioq_poll_period_us": 0, 00:06:04.058 "io_queue_requests": 0, 00:06:04.058 "delay_cmd_submit": true, 00:06:04.058 "transport_retry_count": 4, 00:06:04.058 "bdev_retry_count": 3, 00:06:04.058 "transport_ack_timeout": 0, 00:06:04.058 "ctrlr_loss_timeout_sec": 0, 00:06:04.058 "reconnect_delay_sec": 0, 00:06:04.058 "fast_io_fail_timeout_sec": 0, 00:06:04.058 "disable_auto_failback": false, 00:06:04.058 "generate_uuids": false, 00:06:04.058 "transport_tos": 0, 00:06:04.058 "nvme_error_stat": false, 00:06:04.058 "rdma_srq_size": 0, 00:06:04.058 "io_path_stat": false, 00:06:04.058 "allow_accel_sequence": false, 00:06:04.058 "rdma_max_cq_size": 0, 00:06:04.058 "rdma_cm_event_timeout_ms": 0, 00:06:04.058 "dhchap_digests": [ 00:06:04.058 "sha256", 00:06:04.058 "sha384", 00:06:04.058 "sha512" 00:06:04.058 ], 00:06:04.058 "dhchap_dhgroups": [ 00:06:04.058 "null", 00:06:04.058 "ffdhe2048", 00:06:04.058 "ffdhe3072", 00:06:04.058 "ffdhe4096", 00:06:04.058 "ffdhe6144", 00:06:04.058 "ffdhe8192" 00:06:04.058 ] 00:06:04.058 } 00:06:04.058 }, 00:06:04.058 { 00:06:04.058 "method": "bdev_nvme_set_hotplug", 00:06:04.058 "params": { 00:06:04.058 "period_us": 100000, 00:06:04.058 "enable": false 00:06:04.058 } 00:06:04.058 }, 00:06:04.058 { 00:06:04.058 "method": "bdev_wait_for_examine" 00:06:04.058 } 00:06:04.058 ] 00:06:04.058 }, 00:06:04.058 { 00:06:04.058 "subsystem": "scsi", 00:06:04.058 "config": null 00:06:04.058 }, 00:06:04.058 { 00:06:04.058 "subsystem": "scheduler", 
00:06:04.058 "config": [ 00:06:04.058 { 00:06:04.058 "method": "framework_set_scheduler", 00:06:04.058 "params": { 00:06:04.058 "name": "static" 00:06:04.058 } 00:06:04.058 } 00:06:04.058 ] 00:06:04.058 }, 00:06:04.058 { 00:06:04.058 "subsystem": "vhost_scsi", 00:06:04.058 "config": [] 00:06:04.058 }, 00:06:04.058 { 00:06:04.058 "subsystem": "vhost_blk", 00:06:04.058 "config": [] 00:06:04.058 }, 00:06:04.058 { 00:06:04.058 "subsystem": "ublk", 00:06:04.058 "config": [] 00:06:04.058 }, 00:06:04.058 { 00:06:04.058 "subsystem": "nbd", 00:06:04.058 "config": [] 00:06:04.058 }, 00:06:04.058 { 00:06:04.058 "subsystem": "nvmf", 00:06:04.058 "config": [ 00:06:04.058 { 00:06:04.058 "method": "nvmf_set_config", 00:06:04.058 "params": { 00:06:04.058 "discovery_filter": "match_any", 00:06:04.058 "admin_cmd_passthru": { 00:06:04.058 "identify_ctrlr": false 00:06:04.058 }, 00:06:04.058 "dhchap_digests": [ 00:06:04.058 "sha256", 00:06:04.058 "sha384", 00:06:04.058 "sha512" 00:06:04.058 ], 00:06:04.058 "dhchap_dhgroups": [ 00:06:04.058 "null", 00:06:04.058 "ffdhe2048", 00:06:04.058 "ffdhe3072", 00:06:04.058 "ffdhe4096", 00:06:04.058 "ffdhe6144", 00:06:04.058 "ffdhe8192" 00:06:04.058 ] 00:06:04.058 } 00:06:04.058 }, 00:06:04.058 { 00:06:04.058 "method": "nvmf_set_max_subsystems", 00:06:04.058 "params": { 00:06:04.058 "max_subsystems": 1024 00:06:04.058 } 00:06:04.058 }, 00:06:04.058 { 00:06:04.058 "method": "nvmf_set_crdt", 00:06:04.058 "params": { 00:06:04.058 "crdt1": 0, 00:06:04.058 "crdt2": 0, 00:06:04.058 "crdt3": 0 00:06:04.058 } 00:06:04.058 }, 00:06:04.058 { 00:06:04.058 "method": "nvmf_create_transport", 00:06:04.058 "params": { 00:06:04.058 "trtype": "TCP", 00:06:04.058 "max_queue_depth": 128, 00:06:04.058 "max_io_qpairs_per_ctrlr": 127, 00:06:04.058 "in_capsule_data_size": 4096, 00:06:04.058 "max_io_size": 131072, 00:06:04.058 "io_unit_size": 131072, 00:06:04.058 "max_aq_depth": 128, 00:06:04.058 "num_shared_buffers": 511, 00:06:04.058 "buf_cache_size": 4294967295, 
00:06:04.058 "dif_insert_or_strip": false, 00:06:04.058 "zcopy": false, 00:06:04.058 "c2h_success": true, 00:06:04.058 "sock_priority": 0, 00:06:04.058 "abort_timeout_sec": 1, 00:06:04.058 "ack_timeout": 0, 00:06:04.058 "data_wr_pool_size": 0 00:06:04.058 } 00:06:04.058 } 00:06:04.058 ] 00:06:04.058 }, 00:06:04.058 { 00:06:04.058 "subsystem": "iscsi", 00:06:04.058 "config": [ 00:06:04.058 { 00:06:04.058 "method": "iscsi_set_options", 00:06:04.058 "params": { 00:06:04.058 "node_base": "iqn.2016-06.io.spdk", 00:06:04.058 "max_sessions": 128, 00:06:04.058 "max_connections_per_session": 2, 00:06:04.058 "max_queue_depth": 64, 00:06:04.058 "default_time2wait": 2, 00:06:04.058 "default_time2retain": 20, 00:06:04.058 "first_burst_length": 8192, 00:06:04.058 "immediate_data": true, 00:06:04.058 "allow_duplicated_isid": false, 00:06:04.058 "error_recovery_level": 0, 00:06:04.058 "nop_timeout": 60, 00:06:04.058 "nop_in_interval": 30, 00:06:04.058 "disable_chap": false, 00:06:04.058 "require_chap": false, 00:06:04.058 "mutual_chap": false, 00:06:04.058 "chap_group": 0, 00:06:04.058 "max_large_datain_per_connection": 64, 00:06:04.058 "max_r2t_per_connection": 4, 00:06:04.058 "pdu_pool_size": 36864, 00:06:04.058 "immediate_data_pool_size": 16384, 00:06:04.058 "data_out_pool_size": 2048 00:06:04.058 } 00:06:04.058 } 00:06:04.058 ] 00:06:04.058 } 00:06:04.058 ] 00:06:04.058 } 00:06:04.058 03:06:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:04.058 03:06:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69538 00:06:04.058 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69538 ']' 00:06:04.058 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69538 00:06:04.058 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:04.058 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
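The `save_config` dump above is plain JSON, so it can be inspected with `jq` — for example, listing only the subsystems that actually carry configuration. The sample document below is a small stand-in for the real dump; note that, as in the trace, a subsystem may use `"config": null` (scsi), which the `// []` fallback absorbs:

```shell
# Build a small stand-in for the save_config output and list the
# subsystems whose config array is non-empty. Sample data only.
cat > /tmp/sample_spdk_config.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "keyring", "config": [] },
    { "subsystem": "scsi",    "config": null },
    { "subsystem": "nvmf",    "config": [ { "method": "nvmf_set_config" } ] }
  ]
}
EOF

jq -r '.subsystems[] | select((.config // []) | length > 0) | .subsystem' \
    /tmp/sample_spdk_config.json
```

The same filter applied to the dump in the log would pick out fsdev, iobuf, sock, accel, bdev, scheduler, nvmf and iscsi while skipping the empty keyring/vmd/vhost entries.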
00:06:04.058 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69538 00:06:04.058 killing process with pid 69538 00:06:04.058 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.058 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.058 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69538' 00:06:04.058 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69538 00:06:04.058 03:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69538 00:06:04.627 03:06:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69567 00:06:04.627 03:06:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:04.628 03:06:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:09.906 03:06:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69567 00:06:09.906 03:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69567 ']' 00:06:09.906 03:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69567 00:06:09.906 03:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:09.906 03:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.906 03:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69567 00:06:09.906 killing process with pid 69567 00:06:09.906 03:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.906 03:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:06:09.906 03:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69567' 00:06:09.906 03:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69567 00:06:09.906 03:06:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69567 00:06:09.906 03:06:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:09.906 03:06:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:09.906 00:06:09.906 real 0m7.083s 00:06:09.906 user 0m6.723s 00:06:09.906 sys 0m0.737s 00:06:09.906 ************************************ 00:06:09.906 END TEST skip_rpc_with_json 00:06:09.906 ************************************ 00:06:09.906 03:06:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.906 03:06:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:09.906 03:06:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:09.906 03:06:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.906 03:06:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.906 03:06:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.906 ************************************ 00:06:09.906 START TEST skip_rpc_with_delay 00:06:09.906 ************************************ 00:06:09.906 03:06:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:09.906 03:06:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:09.906 03:06:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:09.906 03:06:13 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:09.906 03:06:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.906 03:06:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.906 03:06:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.906 03:06:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.906 03:06:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.906 03:06:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.906 03:06:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.906 03:06:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:09.906 03:06:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:10.166 [2024-11-18 03:06:13.516484] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
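Before invoking the target under `NOT`, the trace above vets the argument with `type -t` (is it a function or builtin?) and falls back to `type -P` plus an executability check for on-disk commands. A minimal sketch of that guard (illustrative of the `valid_exec_arg` pattern, not the exact implementation):

```shell
# Accept the argument if it is a shell function, a builtin, or an
# executable file on PATH; mirrors the type -t / type -P checks above.
valid_exec_arg() {
    local arg=$1
    case "$(type -t "$arg")" in
        function|builtin) return 0 ;;
        file)
            arg=$(type -P "$arg") && [ -x "$arg" ]   # resolve path, check +x
            ;;
        *) return 1 ;;
    esac
}

valid_exec_arg ls && echo "ls is runnable"
```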
00:06:10.166 [2024-11-18 03:06:13.516629] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:10.166 03:06:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:10.166 03:06:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.166 03:06:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:10.166 03:06:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.166 00:06:10.166 real 0m0.168s 00:06:10.166 user 0m0.091s 00:06:10.166 sys 0m0.075s 00:06:10.166 03:06:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.166 ************************************ 00:06:10.166 END TEST skip_rpc_with_delay 00:06:10.166 ************************************ 00:06:10.166 03:06:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:10.166 03:06:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:10.166 03:06:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:10.166 03:06:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:10.166 03:06:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.166 03:06:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.166 03:06:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.166 ************************************ 00:06:10.166 START TEST exit_on_failed_rpc_init 00:06:10.166 ************************************ 00:06:10.166 03:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:10.166 03:06:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69679 00:06:10.166 03:06:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:06:10.166 03:06:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69679 00:06:10.166 03:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69679 ']' 00:06:10.166 03:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.166 03:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.166 03:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.166 03:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.166 03:06:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:10.426 [2024-11-18 03:06:13.748616] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
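The `waitforlisten 69679` step above blocks until the freshly started target is accepting connections on `/var/tmp/spdk.sock`. A loose sketch of the polling idea (simplified; the `tries` parameter and defaults are illustrative, and the real helper also probes the socket over RPC rather than just checking that the file exists):

```shell
# Poll for the target's Unix socket, bailing out early if the process
# dies while starting or the retry budget is exhausted.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} tries=${3:-100} i
    for (( i = 0; i < tries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
        [ -S "$sock" ] && return 0               # socket is up
        sleep 0.1
    done
    return 1
}
```

This also explains the final error in the trace: a second `spdk_tgt` started against the same socket path fails with "RPC Unix domain socket path /var/tmp/spdk.sock in use" before the socket check could ever succeed.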
00:06:10.426 [2024-11-18 03:06:13.748754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69679 ] 00:06:10.426 [2024-11-18 03:06:13.907855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.426 [2024-11-18 03:06:13.958160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.364 03:06:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.364 03:06:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:11.364 03:06:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.364 03:06:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:11.364 03:06:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:11.365 03:06:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:11.365 03:06:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.365 03:06:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.365 03:06:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.365 03:06:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.365 03:06:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.365 03:06:14 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.365 03:06:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.365 03:06:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:11.365 03:06:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:11.365 [2024-11-18 03:06:14.692439] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:11.365 [2024-11-18 03:06:14.692589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69695 ] 00:06:11.365 [2024-11-18 03:06:14.857380] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.365 [2024-11-18 03:06:14.908456] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.365 [2024-11-18 03:06:14.908569] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:11.365 [2024-11-18 03:06:14.908587] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:11.365 [2024-11-18 03:06:14.908602] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.624 03:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:11.624 03:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.624 03:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:11.624 03:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:11.624 03:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:11.624 03:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.624 03:06:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:11.624 03:06:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69679 00:06:11.624 03:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69679 ']' 00:06:11.624 03:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69679 00:06:11.624 03:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:11.624 03:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.624 03:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69679 00:06:11.624 03:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.624 03:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.624 03:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69679' 
00:06:11.624 killing process with pid 69679 00:06:11.624 03:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69679 00:06:11.624 03:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69679 00:06:12.194 00:06:12.194 real 0m1.817s 00:06:12.194 user 0m1.988s 00:06:12.194 sys 0m0.523s 00:06:12.194 03:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.194 03:06:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:12.194 ************************************ 00:06:12.194 END TEST exit_on_failed_rpc_init 00:06:12.194 ************************************ 00:06:12.194 03:06:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:12.194 00:06:12.194 real 0m15.013s 00:06:12.194 user 0m14.060s 00:06:12.194 sys 0m1.966s 00:06:12.194 03:06:15 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.194 03:06:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.194 ************************************ 00:06:12.194 END TEST skip_rpc 00:06:12.194 ************************************ 00:06:12.194 03:06:15 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:12.194 03:06:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.194 03:06:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.194 03:06:15 -- common/autotest_common.sh@10 -- # set +x 00:06:12.194 ************************************ 00:06:12.194 START TEST rpc_client 00:06:12.194 ************************************ 00:06:12.194 03:06:15 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:12.194 * Looking for test storage... 
00:06:12.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:12.194 03:06:15 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:12.194 03:06:15 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:12.194 03:06:15 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:12.454 03:06:15 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.454 03:06:15 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:12.454 03:06:15 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.454 03:06:15 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:12.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.454 --rc genhtml_branch_coverage=1 00:06:12.454 --rc genhtml_function_coverage=1 00:06:12.454 --rc genhtml_legend=1 00:06:12.454 --rc geninfo_all_blocks=1 00:06:12.454 --rc geninfo_unexecuted_blocks=1 00:06:12.454 00:06:12.454 ' 00:06:12.454 03:06:15 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:12.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.454 --rc genhtml_branch_coverage=1 00:06:12.454 --rc genhtml_function_coverage=1 00:06:12.454 --rc genhtml_legend=1 00:06:12.454 --rc geninfo_all_blocks=1 00:06:12.454 --rc geninfo_unexecuted_blocks=1 00:06:12.454 00:06:12.454 ' 00:06:12.454 03:06:15 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:12.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.454 --rc genhtml_branch_coverage=1 00:06:12.454 --rc genhtml_function_coverage=1 00:06:12.454 --rc genhtml_legend=1 00:06:12.454 --rc geninfo_all_blocks=1 00:06:12.454 --rc geninfo_unexecuted_blocks=1 00:06:12.454 00:06:12.454 ' 00:06:12.454 03:06:15 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:12.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.454 --rc genhtml_branch_coverage=1 00:06:12.454 --rc genhtml_function_coverage=1 00:06:12.454 --rc genhtml_legend=1 00:06:12.454 --rc geninfo_all_blocks=1 00:06:12.454 --rc geninfo_unexecuted_blocks=1 00:06:12.454 00:06:12.454 ' 00:06:12.454 03:06:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:12.454 OK 00:06:12.454 03:06:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:12.454 ************************************ 00:06:12.454 END TEST rpc_client 00:06:12.454 ************************************ 00:06:12.454 00:06:12.454 real 0m0.302s 00:06:12.454 user 0m0.171s 00:06:12.454 sys 0m0.145s 00:06:12.454 03:06:15 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.454 03:06:15 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:12.454 03:06:15 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:12.454 03:06:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.454 03:06:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.454 03:06:15 -- common/autotest_common.sh@10 -- # set +x 00:06:12.454 ************************************ 00:06:12.454 START TEST json_config 00:06:12.454 ************************************ 00:06:12.454 03:06:15 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:12.715 03:06:16 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:12.715 03:06:16 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:12.715 03:06:16 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:12.715 03:06:16 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:12.715 03:06:16 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.715 03:06:16 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.715 03:06:16 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.715 03:06:16 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.715 03:06:16 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.715 03:06:16 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.715 03:06:16 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.715 03:06:16 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.715 03:06:16 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.715 03:06:16 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.715 03:06:16 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.715 03:06:16 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:12.715 03:06:16 json_config -- scripts/common.sh@345 -- # : 1 00:06:12.715 03:06:16 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.715 03:06:16 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.715 03:06:16 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:12.715 03:06:16 json_config -- scripts/common.sh@353 -- # local d=1 00:06:12.715 03:06:16 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.715 03:06:16 json_config -- scripts/common.sh@355 -- # echo 1 00:06:12.715 03:06:16 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.715 03:06:16 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:12.715 03:06:16 json_config -- scripts/common.sh@353 -- # local d=2 00:06:12.715 03:06:16 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.715 03:06:16 json_config -- scripts/common.sh@355 -- # echo 2 00:06:12.715 03:06:16 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.715 03:06:16 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.715 03:06:16 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.715 03:06:16 json_config -- scripts/common.sh@368 -- # return 0 00:06:12.715 03:06:16 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.715 03:06:16 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:12.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.715 --rc genhtml_branch_coverage=1 00:06:12.715 --rc genhtml_function_coverage=1 00:06:12.715 --rc genhtml_legend=1 00:06:12.715 --rc geninfo_all_blocks=1 00:06:12.715 --rc geninfo_unexecuted_blocks=1 00:06:12.715 00:06:12.715 ' 00:06:12.715 03:06:16 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:12.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.715 --rc genhtml_branch_coverage=1 00:06:12.715 --rc genhtml_function_coverage=1 00:06:12.715 --rc genhtml_legend=1 00:06:12.715 --rc geninfo_all_blocks=1 00:06:12.715 --rc geninfo_unexecuted_blocks=1 00:06:12.715 00:06:12.715 ' 00:06:12.715 03:06:16 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:12.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.715 --rc genhtml_branch_coverage=1 00:06:12.715 --rc genhtml_function_coverage=1 00:06:12.715 --rc genhtml_legend=1 00:06:12.715 --rc geninfo_all_blocks=1 00:06:12.715 --rc geninfo_unexecuted_blocks=1 00:06:12.715 00:06:12.715 ' 00:06:12.715 03:06:16 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:12.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.715 --rc genhtml_branch_coverage=1 00:06:12.715 --rc genhtml_function_coverage=1 00:06:12.715 --rc genhtml_legend=1 00:06:12.715 --rc geninfo_all_blocks=1 00:06:12.715 --rc geninfo_unexecuted_blocks=1 00:06:12.715 00:06:12.715 ' 00:06:12.715 03:06:16 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:40b5cd40-24b0-458e-bc66-c7aa18c725f1 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=40b5cd40-24b0-458e-bc66-c7aa18c725f1 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:12.715 03:06:16 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:12.715 03:06:16 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.715 03:06:16 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.715 03:06:16 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.715 03:06:16 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.715 03:06:16 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.715 03:06:16 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.715 03:06:16 json_config -- paths/export.sh@5 -- # export PATH 00:06:12.715 03:06:16 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@51 -- # : 0 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.715 03:06:16 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:12.716 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:12.716 03:06:16 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:12.716 03:06:16 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:12.716 03:06:16 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:12.716 03:06:16 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:12.716 03:06:16 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:12.716 03:06:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:12.716 03:06:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:12.716 03:06:16 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:12.716 03:06:16 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:12.716 WARNING: No tests are enabled so not running JSON configuration tests 00:06:12.716 03:06:16 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:12.716 00:06:12.716 real 0m0.229s 00:06:12.716 user 0m0.140s 00:06:12.716 sys 0m0.094s 00:06:12.716 03:06:16 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.716 03:06:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.716 ************************************ 00:06:12.716 END TEST json_config 00:06:12.716 ************************************ 00:06:12.716 03:06:16 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:12.716 03:06:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.716 03:06:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.716 03:06:16 -- common/autotest_common.sh@10 -- # set +x 00:06:12.716 ************************************ 00:06:12.716 START TEST json_config_extra_key 00:06:12.716 ************************************ 00:06:12.716 03:06:16 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:12.977 03:06:16 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:12.977 03:06:16 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:06:12.977 03:06:16 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:12.977 03:06:16 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:12.977 03:06:16 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.977 03:06:16 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:12.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.977 --rc genhtml_branch_coverage=1 00:06:12.977 --rc genhtml_function_coverage=1 00:06:12.977 --rc genhtml_legend=1 00:06:12.977 --rc geninfo_all_blocks=1 00:06:12.977 --rc geninfo_unexecuted_blocks=1 00:06:12.977 00:06:12.977 ' 00:06:12.977 03:06:16 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:12.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.977 --rc genhtml_branch_coverage=1 00:06:12.977 --rc genhtml_function_coverage=1 00:06:12.977 --rc 
genhtml_legend=1 00:06:12.977 --rc geninfo_all_blocks=1 00:06:12.977 --rc geninfo_unexecuted_blocks=1 00:06:12.977 00:06:12.977 ' 00:06:12.977 03:06:16 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:12.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.977 --rc genhtml_branch_coverage=1 00:06:12.977 --rc genhtml_function_coverage=1 00:06:12.977 --rc genhtml_legend=1 00:06:12.977 --rc geninfo_all_blocks=1 00:06:12.977 --rc geninfo_unexecuted_blocks=1 00:06:12.977 00:06:12.977 ' 00:06:12.977 03:06:16 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:12.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.977 --rc genhtml_branch_coverage=1 00:06:12.977 --rc genhtml_function_coverage=1 00:06:12.977 --rc genhtml_legend=1 00:06:12.977 --rc geninfo_all_blocks=1 00:06:12.977 --rc geninfo_unexecuted_blocks=1 00:06:12.977 00:06:12.977 ' 00:06:12.977 03:06:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:40b5cd40-24b0-458e-bc66-c7aa18c725f1 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=40b5cd40-24b0-458e-bc66-c7aa18c725f1 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.977 03:06:16 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.977 03:06:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.977 03:06:16 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.977 03:06:16 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.977 03:06:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:12.977 03:06:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:12.977 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:12.977 03:06:16 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:12.977 03:06:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:12.977 03:06:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:12.977 03:06:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:12.977 03:06:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:12.977 03:06:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:12.977 03:06:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:12.978 03:06:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:12.978 03:06:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:12.978 03:06:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:12.978 03:06:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:12.978 03:06:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:12.978 INFO: launching applications... 
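The trace above records a real shell error from nvmf/common.sh line 33: `'[' '' -eq 1 ']'` fails with "integer expression expected" because `-eq` requires both operands to be integers and the variable expanded to an empty string. A minimal sketch of the failure and a defensive rewrite (the guard shown is an illustration, not SPDK's actual fix):

```shell
# An unset/empty variable, as in the logged failure.
val=''

# This is the failing pattern: [ '' -eq 1 ] is a syntax error for test(1).
# Stderr is suppressed here so the sketch runs cleanly; the test simply fails.
if [ "$val" -eq 1 ] 2>/dev/null; then
  echo "enabled"
fi

# Hedged fix: default the empty value to 0 before the integer comparison.
if [ "${val:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```

With the `${val:-0}` default, the comparison is always between two integers and the "integer expression expected" message cannot occur.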
00:06:12.978 03:06:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:12.978 03:06:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:12.978 03:06:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:12.978 03:06:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:12.978 03:06:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:12.978 03:06:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:12.978 03:06:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:12.978 03:06:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:12.978 03:06:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69879 00:06:12.978 03:06:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:12.978 Waiting for target to run... 00:06:12.978 03:06:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69879 /var/tmp/spdk_tgt.sock 00:06:12.978 03:06:16 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69879 ']' 00:06:12.978 03:06:16 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:12.978 03:06:16 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.978 03:06:16 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:12.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:12.978 03:06:16 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:12.978 03:06:16 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.978 03:06:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:13.238 [2024-11-18 03:06:16.569842] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:13.238 [2024-11-18 03:06:16.570081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69879 ] 00:06:13.497 [2024-11-18 03:06:16.947452] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.497 [2024-11-18 03:06:16.977947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.070 00:06:14.070 INFO: shutting down applications... 00:06:14.070 03:06:17 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.070 03:06:17 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:14.071 03:06:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:14.071 03:06:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:14.071 03:06:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:14.071 03:06:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:14.071 03:06:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:14.071 03:06:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69879 ]] 00:06:14.071 03:06:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69879 00:06:14.071 03:06:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:14.071 03:06:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:14.071 03:06:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69879 00:06:14.071 03:06:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:14.642 03:06:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:14.642 03:06:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:14.642 03:06:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69879 00:06:14.642 03:06:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:14.642 03:06:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:14.642 03:06:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:14.642 03:06:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:14.642 SPDK target shutdown done 00:06:14.642 03:06:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:14.642 Success 00:06:14.642 00:06:14.642 real 0m1.668s 00:06:14.642 user 0m1.404s 00:06:14.642 sys 0m0.475s 00:06:14.642 03:06:17 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.642 03:06:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:14.642 ************************************ 
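The shutdown sequence traced above (json_config/common.sh) sends SIGINT to the target, then polls `kill -0` up to 30 times with 0.5 s sleeps until the process disappears. A condensed sketch of that pattern; the `sleep` stand-in and SIGTERM are assumptions for a self-contained demo (backgrounded jobs in non-interactive shells ignore SIGINT, so TERM is used here, whereas the real script sends SIGINT to spdk_tgt, which handles it):

```shell
# Stand-in for the spdk_tgt process being shut down.
sleep 300 &
pid=$!

# Ask the target to exit, then poll until it is gone or ~15s elapse.
kill -TERM "$pid"
for ((i = 0; i < 30; i++)); do
  if ! kill -0 "$pid" 2>/dev/null; then
    echo 'SPDK target shutdown done'
    break
  fi
  sleep 0.5
done
```

`kill -0` sends no signal; it only checks whether the PID still exists, which is why the loop can use it as a liveness probe.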
00:06:14.642 END TEST json_config_extra_key 00:06:14.642 ************************************ 00:06:14.642 03:06:17 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:14.642 03:06:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.642 03:06:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.642 03:06:17 -- common/autotest_common.sh@10 -- # set +x 00:06:14.642 ************************************ 00:06:14.642 START TEST alias_rpc 00:06:14.642 ************************************ 00:06:14.642 03:06:17 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:14.642 * Looking for test storage... 00:06:14.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:14.642 03:06:18 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:14.642 03:06:18 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:14.642 03:06:18 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:14.642 03:06:18 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.642 03:06:18 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.642 03:06:18 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.643 03:06:18 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.643 03:06:18 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:14.643 03:06:18 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.643 03:06:18 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:14.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.643 --rc genhtml_branch_coverage=1 00:06:14.643 --rc genhtml_function_coverage=1 00:06:14.643 --rc genhtml_legend=1 00:06:14.643 --rc geninfo_all_blocks=1 00:06:14.643 --rc geninfo_unexecuted_blocks=1 00:06:14.643 00:06:14.643 ' 00:06:14.643 03:06:18 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:14.643 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.643 --rc genhtml_branch_coverage=1 00:06:14.643 --rc genhtml_function_coverage=1 00:06:14.643 --rc genhtml_legend=1 00:06:14.643 --rc geninfo_all_blocks=1 00:06:14.643 --rc geninfo_unexecuted_blocks=1 00:06:14.643 00:06:14.643 ' 00:06:14.643 03:06:18 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:14.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.643 --rc genhtml_branch_coverage=1 00:06:14.643 --rc genhtml_function_coverage=1 00:06:14.643 --rc genhtml_legend=1 00:06:14.643 --rc geninfo_all_blocks=1 00:06:14.643 --rc geninfo_unexecuted_blocks=1 00:06:14.643 00:06:14.643 ' 00:06:14.643 03:06:18 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:14.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.643 --rc genhtml_branch_coverage=1 00:06:14.643 --rc genhtml_function_coverage=1 00:06:14.643 --rc genhtml_legend=1 00:06:14.643 --rc geninfo_all_blocks=1 00:06:14.643 --rc geninfo_unexecuted_blocks=1 00:06:14.643 00:06:14.643 ' 00:06:14.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
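The xtrace around scripts/common.sh above shows `lt 1.15 2` splitting each version on `IFS=.-:` into arrays and comparing component by component. A condensed sketch of that comparison (function name and structure are a simplification of the traced `cmp_versions`, not a verbatim copy):

```shell
# Return 0 (true) when version $1 sorts strictly before version $2.
version_lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  # Walk the longer of the two component lists; missing parts count as 0.
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
      return 0
    elif (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
      return 1
    fi
  done
  return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo yes || echo no   # 1.15 < 2, so this prints yes
```

Comparing numerically per component (rather than lexically on the whole string) is what makes 1.15 sort before 2 even though "1.15" > "2" as plain strings.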
00:06:14.643 03:06:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:14.643 03:06:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69958 00:06:14.643 03:06:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:14.643 03:06:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69958 00:06:14.643 03:06:18 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69958 ']' 00:06:14.643 03:06:18 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.643 03:06:18 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.643 03:06:18 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.643 03:06:18 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.643 03:06:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.903 [2024-11-18 03:06:18.291675] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:14.903 [2024-11-18 03:06:18.292293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69958 ] 00:06:14.903 [2024-11-18 03:06:18.453045] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.164 [2024-11-18 03:06:18.503423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.733 03:06:19 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.733 03:06:19 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:15.733 03:06:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:15.993 03:06:19 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69958 00:06:15.993 03:06:19 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69958 ']' 00:06:15.993 03:06:19 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69958 00:06:15.993 03:06:19 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:15.993 03:06:19 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.993 03:06:19 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69958 00:06:15.993 03:06:19 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.993 03:06:19 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.993 03:06:19 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69958' 00:06:15.993 killing process with pid 69958 00:06:15.993 03:06:19 alias_rpc -- common/autotest_common.sh@969 -- # kill 69958 00:06:15.993 03:06:19 alias_rpc -- common/autotest_common.sh@974 -- # wait 69958 00:06:16.252 00:06:16.252 real 0m1.834s 00:06:16.252 user 0m1.910s 00:06:16.252 sys 0m0.491s 00:06:16.252 03:06:19 alias_rpc -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:16.252 03:06:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.252 ************************************ 00:06:16.252 END TEST alias_rpc 00:06:16.252 ************************************ 00:06:16.513 03:06:19 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:16.513 03:06:19 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:16.513 03:06:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.513 03:06:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.513 03:06:19 -- common/autotest_common.sh@10 -- # set +x 00:06:16.513 ************************************ 00:06:16.513 START TEST spdkcli_tcp 00:06:16.513 ************************************ 00:06:16.513 03:06:19 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:16.513 * Looking for test storage... 00:06:16.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:16.513 03:06:19 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:16.513 03:06:20 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:16.513 03:06:20 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:16.513 03:06:20 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.513 
03:06:20 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:16.513 03:06:20 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.774 03:06:20 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:16.774 03:06:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:16.774 03:06:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.774 03:06:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:16.774 03:06:20 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.774 03:06:20 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.774 03:06:20 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.774 03:06:20 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:16.774 03:06:20 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.774 03:06:20 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:16.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.774 --rc genhtml_branch_coverage=1 00:06:16.774 --rc genhtml_function_coverage=1 00:06:16.774 --rc genhtml_legend=1 
00:06:16.774 --rc geninfo_all_blocks=1 00:06:16.774 --rc geninfo_unexecuted_blocks=1 00:06:16.774 00:06:16.774 ' 00:06:16.774 03:06:20 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:16.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.774 --rc genhtml_branch_coverage=1 00:06:16.774 --rc genhtml_function_coverage=1 00:06:16.774 --rc genhtml_legend=1 00:06:16.774 --rc geninfo_all_blocks=1 00:06:16.774 --rc geninfo_unexecuted_blocks=1 00:06:16.774 00:06:16.774 ' 00:06:16.774 03:06:20 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:16.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.774 --rc genhtml_branch_coverage=1 00:06:16.774 --rc genhtml_function_coverage=1 00:06:16.774 --rc genhtml_legend=1 00:06:16.774 --rc geninfo_all_blocks=1 00:06:16.774 --rc geninfo_unexecuted_blocks=1 00:06:16.774 00:06:16.774 ' 00:06:16.774 03:06:20 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:16.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.774 --rc genhtml_branch_coverage=1 00:06:16.774 --rc genhtml_function_coverage=1 00:06:16.774 --rc genhtml_legend=1 00:06:16.774 --rc geninfo_all_blocks=1 00:06:16.774 --rc geninfo_unexecuted_blocks=1 00:06:16.774 00:06:16.774 ' 00:06:16.774 03:06:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:16.774 03:06:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:16.774 03:06:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:16.774 03:06:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:16.774 03:06:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:16.774 03:06:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:16.774 03:06:20 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:16.774 03:06:20 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:16.774 03:06:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.774 03:06:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70043 00:06:16.774 03:06:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70043 00:06:16.774 03:06:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:16.774 03:06:20 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70043 ']' 00:06:16.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.774 03:06:20 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.774 03:06:20 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.774 03:06:20 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.774 03:06:20 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.774 03:06:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.774 [2024-11-18 03:06:20.196336] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:16.774 [2024-11-18 03:06:20.196548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70043 ] 00:06:17.034 [2024-11-18 03:06:20.358589] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.034 [2024-11-18 03:06:20.409714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.034 [2024-11-18 03:06:20.409818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.602 03:06:21 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.602 03:06:21 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:17.602 03:06:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70060 00:06:17.602 03:06:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:17.602 03:06:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:17.863 [ 00:06:17.863 "bdev_malloc_delete", 00:06:17.863 "bdev_malloc_create", 00:06:17.863 "bdev_null_resize", 00:06:17.863 "bdev_null_delete", 00:06:17.863 "bdev_null_create", 00:06:17.863 "bdev_nvme_cuse_unregister", 00:06:17.863 "bdev_nvme_cuse_register", 00:06:17.863 "bdev_opal_new_user", 00:06:17.863 "bdev_opal_set_lock_state", 00:06:17.863 "bdev_opal_delete", 00:06:17.863 "bdev_opal_get_info", 00:06:17.863 "bdev_opal_create", 00:06:17.863 "bdev_nvme_opal_revert", 00:06:17.863 "bdev_nvme_opal_init", 00:06:17.863 "bdev_nvme_send_cmd", 00:06:17.863 "bdev_nvme_set_keys", 00:06:17.863 "bdev_nvme_get_path_iostat", 00:06:17.863 "bdev_nvme_get_mdns_discovery_info", 00:06:17.863 "bdev_nvme_stop_mdns_discovery", 00:06:17.863 "bdev_nvme_start_mdns_discovery", 00:06:17.863 "bdev_nvme_set_multipath_policy", 00:06:17.863 
"bdev_nvme_set_preferred_path", 00:06:17.863 "bdev_nvme_get_io_paths", 00:06:17.863 "bdev_nvme_remove_error_injection", 00:06:17.863 "bdev_nvme_add_error_injection", 00:06:17.863 "bdev_nvme_get_discovery_info", 00:06:17.863 "bdev_nvme_stop_discovery", 00:06:17.863 "bdev_nvme_start_discovery", 00:06:17.863 "bdev_nvme_get_controller_health_info", 00:06:17.863 "bdev_nvme_disable_controller", 00:06:17.863 "bdev_nvme_enable_controller", 00:06:17.863 "bdev_nvme_reset_controller", 00:06:17.863 "bdev_nvme_get_transport_statistics", 00:06:17.863 "bdev_nvme_apply_firmware", 00:06:17.863 "bdev_nvme_detach_controller", 00:06:17.863 "bdev_nvme_get_controllers", 00:06:17.863 "bdev_nvme_attach_controller", 00:06:17.863 "bdev_nvme_set_hotplug", 00:06:17.863 "bdev_nvme_set_options", 00:06:17.863 "bdev_passthru_delete", 00:06:17.863 "bdev_passthru_create", 00:06:17.863 "bdev_lvol_set_parent_bdev", 00:06:17.863 "bdev_lvol_set_parent", 00:06:17.863 "bdev_lvol_check_shallow_copy", 00:06:17.863 "bdev_lvol_start_shallow_copy", 00:06:17.863 "bdev_lvol_grow_lvstore", 00:06:17.863 "bdev_lvol_get_lvols", 00:06:17.863 "bdev_lvol_get_lvstores", 00:06:17.863 "bdev_lvol_delete", 00:06:17.863 "bdev_lvol_set_read_only", 00:06:17.863 "bdev_lvol_resize", 00:06:17.863 "bdev_lvol_decouple_parent", 00:06:17.863 "bdev_lvol_inflate", 00:06:17.863 "bdev_lvol_rename", 00:06:17.863 "bdev_lvol_clone_bdev", 00:06:17.863 "bdev_lvol_clone", 00:06:17.863 "bdev_lvol_snapshot", 00:06:17.863 "bdev_lvol_create", 00:06:17.863 "bdev_lvol_delete_lvstore", 00:06:17.863 "bdev_lvol_rename_lvstore", 00:06:17.863 "bdev_lvol_create_lvstore", 00:06:17.863 "bdev_raid_set_options", 00:06:17.863 "bdev_raid_remove_base_bdev", 00:06:17.863 "bdev_raid_add_base_bdev", 00:06:17.863 "bdev_raid_delete", 00:06:17.863 "bdev_raid_create", 00:06:17.863 "bdev_raid_get_bdevs", 00:06:17.863 "bdev_error_inject_error", 00:06:17.863 "bdev_error_delete", 00:06:17.863 "bdev_error_create", 00:06:17.863 "bdev_split_delete", 00:06:17.863 
"bdev_split_create", 00:06:17.863 "bdev_delay_delete", 00:06:17.863 "bdev_delay_create", 00:06:17.863 "bdev_delay_update_latency", 00:06:17.863 "bdev_zone_block_delete", 00:06:17.863 "bdev_zone_block_create", 00:06:17.863 "blobfs_create", 00:06:17.863 "blobfs_detect", 00:06:17.863 "blobfs_set_cache_size", 00:06:17.863 "bdev_aio_delete", 00:06:17.863 "bdev_aio_rescan", 00:06:17.863 "bdev_aio_create", 00:06:17.863 "bdev_ftl_set_property", 00:06:17.863 "bdev_ftl_get_properties", 00:06:17.863 "bdev_ftl_get_stats", 00:06:17.864 "bdev_ftl_unmap", 00:06:17.864 "bdev_ftl_unload", 00:06:17.864 "bdev_ftl_delete", 00:06:17.864 "bdev_ftl_load", 00:06:17.864 "bdev_ftl_create", 00:06:17.864 "bdev_virtio_attach_controller", 00:06:17.864 "bdev_virtio_scsi_get_devices", 00:06:17.864 "bdev_virtio_detach_controller", 00:06:17.864 "bdev_virtio_blk_set_hotplug", 00:06:17.864 "bdev_iscsi_delete", 00:06:17.864 "bdev_iscsi_create", 00:06:17.864 "bdev_iscsi_set_options", 00:06:17.864 "accel_error_inject_error", 00:06:17.864 "ioat_scan_accel_module", 00:06:17.864 "dsa_scan_accel_module", 00:06:17.864 "iaa_scan_accel_module", 00:06:17.864 "keyring_file_remove_key", 00:06:17.864 "keyring_file_add_key", 00:06:17.864 "keyring_linux_set_options", 00:06:17.864 "fsdev_aio_delete", 00:06:17.864 "fsdev_aio_create", 00:06:17.864 "iscsi_get_histogram", 00:06:17.864 "iscsi_enable_histogram", 00:06:17.864 "iscsi_set_options", 00:06:17.864 "iscsi_get_auth_groups", 00:06:17.864 "iscsi_auth_group_remove_secret", 00:06:17.864 "iscsi_auth_group_add_secret", 00:06:17.864 "iscsi_delete_auth_group", 00:06:17.864 "iscsi_create_auth_group", 00:06:17.864 "iscsi_set_discovery_auth", 00:06:17.864 "iscsi_get_options", 00:06:17.864 "iscsi_target_node_request_logout", 00:06:17.864 "iscsi_target_node_set_redirect", 00:06:17.864 "iscsi_target_node_set_auth", 00:06:17.864 "iscsi_target_node_add_lun", 00:06:17.864 "iscsi_get_stats", 00:06:17.864 "iscsi_get_connections", 00:06:17.864 "iscsi_portal_group_set_auth", 
00:06:17.864 "iscsi_start_portal_group", 00:06:17.864 "iscsi_delete_portal_group", 00:06:17.864 "iscsi_create_portal_group", 00:06:17.864 "iscsi_get_portal_groups", 00:06:17.864 "iscsi_delete_target_node", 00:06:17.864 "iscsi_target_node_remove_pg_ig_maps", 00:06:17.864 "iscsi_target_node_add_pg_ig_maps", 00:06:17.864 "iscsi_create_target_node", 00:06:17.864 "iscsi_get_target_nodes", 00:06:17.864 "iscsi_delete_initiator_group", 00:06:17.864 "iscsi_initiator_group_remove_initiators", 00:06:17.864 "iscsi_initiator_group_add_initiators", 00:06:17.864 "iscsi_create_initiator_group", 00:06:17.864 "iscsi_get_initiator_groups", 00:06:17.864 "nvmf_set_crdt", 00:06:17.864 "nvmf_set_config", 00:06:17.864 "nvmf_set_max_subsystems", 00:06:17.864 "nvmf_stop_mdns_prr", 00:06:17.864 "nvmf_publish_mdns_prr", 00:06:17.864 "nvmf_subsystem_get_listeners", 00:06:17.864 "nvmf_subsystem_get_qpairs", 00:06:17.864 "nvmf_subsystem_get_controllers", 00:06:17.864 "nvmf_get_stats", 00:06:17.864 "nvmf_get_transports", 00:06:17.864 "nvmf_create_transport", 00:06:17.864 "nvmf_get_targets", 00:06:17.864 "nvmf_delete_target", 00:06:17.864 "nvmf_create_target", 00:06:17.864 "nvmf_subsystem_allow_any_host", 00:06:17.864 "nvmf_subsystem_set_keys", 00:06:17.864 "nvmf_subsystem_remove_host", 00:06:17.864 "nvmf_subsystem_add_host", 00:06:17.864 "nvmf_ns_remove_host", 00:06:17.864 "nvmf_ns_add_host", 00:06:17.864 "nvmf_subsystem_remove_ns", 00:06:17.864 "nvmf_subsystem_set_ns_ana_group", 00:06:17.864 "nvmf_subsystem_add_ns", 00:06:17.864 "nvmf_subsystem_listener_set_ana_state", 00:06:17.864 "nvmf_discovery_get_referrals", 00:06:17.864 "nvmf_discovery_remove_referral", 00:06:17.864 "nvmf_discovery_add_referral", 00:06:17.864 "nvmf_subsystem_remove_listener", 00:06:17.864 "nvmf_subsystem_add_listener", 00:06:17.864 "nvmf_delete_subsystem", 00:06:17.864 "nvmf_create_subsystem", 00:06:17.864 "nvmf_get_subsystems", 00:06:17.864 "env_dpdk_get_mem_stats", 00:06:17.864 "nbd_get_disks", 00:06:17.864 
"nbd_stop_disk", 00:06:17.864 "nbd_start_disk", 00:06:17.864 "ublk_recover_disk", 00:06:17.864 "ublk_get_disks", 00:06:17.864 "ublk_stop_disk", 00:06:17.864 "ublk_start_disk", 00:06:17.864 "ublk_destroy_target", 00:06:17.864 "ublk_create_target", 00:06:17.864 "virtio_blk_create_transport", 00:06:17.864 "virtio_blk_get_transports", 00:06:17.864 "vhost_controller_set_coalescing", 00:06:17.864 "vhost_get_controllers", 00:06:17.864 "vhost_delete_controller", 00:06:17.864 "vhost_create_blk_controller", 00:06:17.864 "vhost_scsi_controller_remove_target", 00:06:17.864 "vhost_scsi_controller_add_target", 00:06:17.864 "vhost_start_scsi_controller", 00:06:17.864 "vhost_create_scsi_controller", 00:06:17.864 "thread_set_cpumask", 00:06:17.864 "scheduler_set_options", 00:06:17.864 "framework_get_governor", 00:06:17.864 "framework_get_scheduler", 00:06:17.864 "framework_set_scheduler", 00:06:17.864 "framework_get_reactors", 00:06:17.864 "thread_get_io_channels", 00:06:17.864 "thread_get_pollers", 00:06:17.864 "thread_get_stats", 00:06:17.864 "framework_monitor_context_switch", 00:06:17.864 "spdk_kill_instance", 00:06:17.864 "log_enable_timestamps", 00:06:17.864 "log_get_flags", 00:06:17.864 "log_clear_flag", 00:06:17.864 "log_set_flag", 00:06:17.864 "log_get_level", 00:06:17.864 "log_set_level", 00:06:17.864 "log_get_print_level", 00:06:17.864 "log_set_print_level", 00:06:17.864 "framework_enable_cpumask_locks", 00:06:17.864 "framework_disable_cpumask_locks", 00:06:17.864 "framework_wait_init", 00:06:17.864 "framework_start_init", 00:06:17.864 "scsi_get_devices", 00:06:17.864 "bdev_get_histogram", 00:06:17.864 "bdev_enable_histogram", 00:06:17.864 "bdev_set_qos_limit", 00:06:17.864 "bdev_set_qd_sampling_period", 00:06:17.864 "bdev_get_bdevs", 00:06:17.864 "bdev_reset_iostat", 00:06:17.864 "bdev_get_iostat", 00:06:17.864 "bdev_examine", 00:06:17.864 "bdev_wait_for_examine", 00:06:17.864 "bdev_set_options", 00:06:17.864 "accel_get_stats", 00:06:17.864 "accel_set_options", 
00:06:17.864 "accel_set_driver", 00:06:17.864 "accel_crypto_key_destroy", 00:06:17.864 "accel_crypto_keys_get", 00:06:17.864 "accel_crypto_key_create", 00:06:17.864 "accel_assign_opc", 00:06:17.864 "accel_get_module_info", 00:06:17.864 "accel_get_opc_assignments", 00:06:17.864 "vmd_rescan", 00:06:17.864 "vmd_remove_device", 00:06:17.864 "vmd_enable", 00:06:17.864 "sock_get_default_impl", 00:06:17.864 "sock_set_default_impl", 00:06:17.864 "sock_impl_set_options", 00:06:17.864 "sock_impl_get_options", 00:06:17.864 "iobuf_get_stats", 00:06:17.864 "iobuf_set_options", 00:06:17.864 "keyring_get_keys", 00:06:17.864 "framework_get_pci_devices", 00:06:17.864 "framework_get_config", 00:06:17.864 "framework_get_subsystems", 00:06:17.864 "fsdev_set_opts", 00:06:17.864 "fsdev_get_opts", 00:06:17.864 "trace_get_info", 00:06:17.864 "trace_get_tpoint_group_mask", 00:06:17.864 "trace_disable_tpoint_group", 00:06:17.864 "trace_enable_tpoint_group", 00:06:17.864 "trace_clear_tpoint_mask", 00:06:17.864 "trace_set_tpoint_mask", 00:06:17.864 "notify_get_notifications", 00:06:17.864 "notify_get_types", 00:06:17.864 "spdk_get_version", 00:06:17.864 "rpc_get_methods" 00:06:17.864 ] 00:06:17.864 03:06:21 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:17.864 03:06:21 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:17.864 03:06:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.864 03:06:21 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:17.864 03:06:21 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70043 00:06:17.864 03:06:21 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70043 ']' 00:06:17.864 03:06:21 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70043 00:06:17.864 03:06:21 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:17.865 03:06:21 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.865 03:06:21 spdkcli_tcp -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70043 00:06:17.865 killing process with pid 70043 00:06:17.865 03:06:21 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.865 03:06:21 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.865 03:06:21 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70043' 00:06:17.865 03:06:21 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70043 00:06:17.865 03:06:21 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70043 00:06:18.436 ************************************ 00:06:18.436 END TEST spdkcli_tcp 00:06:18.436 ************************************ 00:06:18.436 00:06:18.436 real 0m1.844s 00:06:18.436 user 0m3.069s 00:06:18.436 sys 0m0.566s 00:06:18.436 03:06:21 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.436 03:06:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.436 03:06:21 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:18.436 03:06:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.436 03:06:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.436 03:06:21 -- common/autotest_common.sh@10 -- # set +x 00:06:18.436 ************************************ 00:06:18.436 START TEST dpdk_mem_utility 00:06:18.436 ************************************ 00:06:18.436 03:06:21 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:18.436 * Looking for test storage... 
00:06:18.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:18.436 03:06:21 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:18.436 03:06:21 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:18.436 03:06:21 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:18.436 03:06:21 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:18.436 03:06:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:18.436 03:06:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.436 03:06:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:18.436 03:06:22 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.436 03:06:22 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.436 03:06:22 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.436 03:06:22 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:18.436 03:06:22 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.436 03:06:22 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:18.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.436 --rc genhtml_branch_coverage=1 00:06:18.436 --rc genhtml_function_coverage=1 00:06:18.436 --rc genhtml_legend=1 00:06:18.436 --rc geninfo_all_blocks=1 00:06:18.436 --rc geninfo_unexecuted_blocks=1 00:06:18.436 00:06:18.436 ' 00:06:18.436 03:06:22 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:18.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.436 --rc genhtml_branch_coverage=1 00:06:18.436 --rc genhtml_function_coverage=1 00:06:18.436 --rc genhtml_legend=1 00:06:18.436 --rc geninfo_all_blocks=1 00:06:18.436 --rc 
geninfo_unexecuted_blocks=1 00:06:18.436 00:06:18.436 ' 00:06:18.436 03:06:22 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:18.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.436 --rc genhtml_branch_coverage=1 00:06:18.436 --rc genhtml_function_coverage=1 00:06:18.436 --rc genhtml_legend=1 00:06:18.436 --rc geninfo_all_blocks=1 00:06:18.436 --rc geninfo_unexecuted_blocks=1 00:06:18.436 00:06:18.436 ' 00:06:18.436 03:06:22 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:18.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.436 --rc genhtml_branch_coverage=1 00:06:18.436 --rc genhtml_function_coverage=1 00:06:18.436 --rc genhtml_legend=1 00:06:18.436 --rc geninfo_all_blocks=1 00:06:18.436 --rc geninfo_unexecuted_blocks=1 00:06:18.436 00:06:18.436 ' 00:06:18.436 03:06:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:18.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.697 03:06:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70143 00:06:18.697 03:06:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:18.697 03:06:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70143 00:06:18.697 03:06:22 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70143 ']' 00:06:18.697 03:06:22 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.697 03:06:22 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.697 03:06:22 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:18.697 03:06:22 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.697 03:06:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:18.697 [2024-11-18 03:06:22.101061] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:18.697 [2024-11-18 03:06:22.101284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70143 ] 00:06:18.697 [2024-11-18 03:06:22.261060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.956 [2024-11-18 03:06:22.311226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.528 03:06:22 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.528 03:06:22 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:19.528 03:06:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:19.528 03:06:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:19.528 03:06:22 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.528 03:06:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:19.528 { 00:06:19.528 "filename": "/tmp/spdk_mem_dump.txt" 00:06:19.528 } 00:06:19.528 03:06:22 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.528 03:06:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:19.528 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:19.528 1 heaps totaling size 860.000000 MiB 00:06:19.528 size: 860.000000 MiB heap id: 0 00:06:19.528 end heaps---------- 00:06:19.528 9 mempools totaling size 642.649841 MiB 00:06:19.528 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:19.528 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:19.528 size: 92.545471 MiB name: bdev_io_70143 00:06:19.528 size: 51.011292 MiB name: evtpool_70143 00:06:19.528 size: 50.003479 MiB name: msgpool_70143 00:06:19.528 size: 36.509338 MiB name: fsdev_io_70143 00:06:19.528 size: 21.763794 MiB name: PDU_Pool 00:06:19.528 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:19.528 size: 0.026123 MiB name: Session_Pool 00:06:19.528 end mempools------- 00:06:19.528 6 memzones totaling size 4.142822 MiB 00:06:19.528 size: 1.000366 MiB name: RG_ring_0_70143 00:06:19.528 size: 1.000366 MiB name: RG_ring_1_70143 00:06:19.528 size: 1.000366 MiB name: RG_ring_4_70143 00:06:19.528 size: 1.000366 MiB name: RG_ring_5_70143 00:06:19.528 size: 0.125366 MiB name: RG_ring_2_70143 00:06:19.528 size: 0.015991 MiB name: RG_ring_3_70143 00:06:19.528 end memzones------- 00:06:19.528 03:06:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:19.528 heap id: 0 total size: 860.000000 MiB number of busy elements: 303 number of free elements: 16 00:06:19.528 list of free elements. 
size: 13.937256 MiB 00:06:19.528 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:19.528 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:19.528 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:19.528 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:19.528 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:19.528 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:19.528 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:19.528 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:19.528 element at address: 0x200000200000 with size: 0.834839 MiB 00:06:19.528 element at address: 0x20001d800000 with size: 0.568237 MiB 00:06:19.528 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:19.528 element at address: 0x200003e00000 with size: 0.488647 MiB 00:06:19.528 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:19.528 element at address: 0x200007000000 with size: 0.480469 MiB 00:06:19.528 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:19.528 element at address: 0x200003a00000 with size: 0.353027 MiB 00:06:19.528 list of standard malloc elements. 
size: 199.266052 MiB 00:06:19.528 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:19.528 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:19.528 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:19.528 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:19.528 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:19.528 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:19.528 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:19.528 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:19.528 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:19.528 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:19.528 element at 
address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:19.528 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:19.528 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:06:19.528 element at address: 0x200003a5a800 with size: 0.000183 MiB 
00:06:19.528 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:06:19.528 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:19.528 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:19.528 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:19.528 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:19.528 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:19.528 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:19.528 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:19.528 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:19.528 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:19.528 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:19.528 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:19.528 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:19.528 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:19.528 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:19.528 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:19.528 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7d9c0 with 
size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:19.529 element at address: 
0x200003eff0c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:19.529 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:19.529 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:19.529 
element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d892bc0 with size: 0.000183 
MiB 00:06:19.529 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d8940c0 
with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:06:19.529 element at 
address: 0x20002ac655c0 with size: 0.000183 MiB 00:06:19.529 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6d680 with size: 0.000183 MiB 
00:06:19.530 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6eb80 with 
size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:19.530 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:19.530 list of memzone associated elements. 
size: 646.796692 MiB 00:06:19.530 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:19.530 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:19.530 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:19.530 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:19.530 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:19.530 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70143_0 00:06:19.530 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:19.530 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70143_0 00:06:19.530 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:19.530 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70143_0 00:06:19.530 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:19.530 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70143_0 00:06:19.530 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:19.530 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:19.530 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:19.530 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:19.530 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:19.530 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70143 00:06:19.530 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:19.530 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70143 00:06:19.530 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:19.530 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70143 00:06:19.530 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:19.530 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:19.530 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:19.530 associated memzone 
info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:19.530 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:19.530 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:19.530 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:19.530 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:19.530 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:19.530 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70143 00:06:19.530 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:19.530 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70143 00:06:19.530 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:19.530 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70143 00:06:19.530 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:19.530 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70143 00:06:19.530 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:19.530 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70143 00:06:19.530 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:19.530 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70143 00:06:19.530 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:19.530 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:19.530 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:19.530 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:19.530 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:19.530 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:19.530 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:06:19.530 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70143 00:06:19.530 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:19.530 associated 
memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:19.530 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:06:19.530 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:19.530 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:06:19.530 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70143 00:06:19.530 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:06:19.530 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:19.530 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:19.530 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70143 00:06:19.530 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:19.530 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70143 00:06:19.530 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:06:19.530 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70143 00:06:19.530 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:06:19.530 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:19.791 03:06:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:19.791 03:06:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70143 00:06:19.791 03:06:23 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70143 ']' 00:06:19.791 03:06:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70143 00:06:19.791 03:06:23 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:19.791 03:06:23 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.791 03:06:23 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70143 00:06:19.791 03:06:23 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.791 03:06:23 dpdk_mem_utility -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.791 03:06:23 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70143' 00:06:19.791 killing process with pid 70143 00:06:19.791 03:06:23 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70143 00:06:19.791 03:06:23 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70143 00:06:20.050 00:06:20.050 real 0m1.743s 00:06:20.050 user 0m1.776s 00:06:20.050 sys 0m0.477s 00:06:20.050 03:06:23 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.050 03:06:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:20.050 ************************************ 00:06:20.050 END TEST dpdk_mem_utility 00:06:20.050 ************************************ 00:06:20.050 03:06:23 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:20.050 03:06:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.050 03:06:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.050 03:06:23 -- common/autotest_common.sh@10 -- # set +x 00:06:20.050 ************************************ 00:06:20.050 START TEST event 00:06:20.050 ************************************ 00:06:20.050 03:06:23 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:20.311 * Looking for test storage... 
00:06:20.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:20.311 03:06:23 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:20.311 03:06:23 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:20.311 03:06:23 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:20.311 03:06:23 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:20.311 03:06:23 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.311 03:06:23 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.311 03:06:23 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.311 03:06:23 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.311 03:06:23 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.311 03:06:23 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.311 03:06:23 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.311 03:06:23 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.311 03:06:23 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.311 03:06:23 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.311 03:06:23 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.311 03:06:23 event -- scripts/common.sh@344 -- # case "$op" in 00:06:20.311 03:06:23 event -- scripts/common.sh@345 -- # : 1 00:06:20.311 03:06:23 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.311 03:06:23 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.311 03:06:23 event -- scripts/common.sh@365 -- # decimal 1 00:06:20.311 03:06:23 event -- scripts/common.sh@353 -- # local d=1 00:06:20.311 03:06:23 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.311 03:06:23 event -- scripts/common.sh@355 -- # echo 1 00:06:20.311 03:06:23 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.311 03:06:23 event -- scripts/common.sh@366 -- # decimal 2 00:06:20.311 03:06:23 event -- scripts/common.sh@353 -- # local d=2 00:06:20.311 03:06:23 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.311 03:06:23 event -- scripts/common.sh@355 -- # echo 2 00:06:20.311 03:06:23 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.311 03:06:23 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.311 03:06:23 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.311 03:06:23 event -- scripts/common.sh@368 -- # return 0 00:06:20.311 03:06:23 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.311 03:06:23 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:20.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.311 --rc genhtml_branch_coverage=1 00:06:20.311 --rc genhtml_function_coverage=1 00:06:20.311 --rc genhtml_legend=1 00:06:20.311 --rc geninfo_all_blocks=1 00:06:20.311 --rc geninfo_unexecuted_blocks=1 00:06:20.311 00:06:20.311 ' 00:06:20.311 03:06:23 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:20.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.311 --rc genhtml_branch_coverage=1 00:06:20.311 --rc genhtml_function_coverage=1 00:06:20.311 --rc genhtml_legend=1 00:06:20.311 --rc geninfo_all_blocks=1 00:06:20.311 --rc geninfo_unexecuted_blocks=1 00:06:20.311 00:06:20.311 ' 00:06:20.311 03:06:23 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:20.311 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:20.311 --rc genhtml_branch_coverage=1 00:06:20.311 --rc genhtml_function_coverage=1 00:06:20.311 --rc genhtml_legend=1 00:06:20.311 --rc geninfo_all_blocks=1 00:06:20.311 --rc geninfo_unexecuted_blocks=1 00:06:20.311 00:06:20.311 ' 00:06:20.311 03:06:23 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:20.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.311 --rc genhtml_branch_coverage=1 00:06:20.311 --rc genhtml_function_coverage=1 00:06:20.311 --rc genhtml_legend=1 00:06:20.311 --rc geninfo_all_blocks=1 00:06:20.311 --rc geninfo_unexecuted_blocks=1 00:06:20.311 00:06:20.311 ' 00:06:20.311 03:06:23 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:20.311 03:06:23 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:20.311 03:06:23 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:20.311 03:06:23 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:20.311 03:06:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.311 03:06:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.311 ************************************ 00:06:20.311 START TEST event_perf 00:06:20.312 ************************************ 00:06:20.312 03:06:23 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:20.312 Running I/O for 1 seconds...[2024-11-18 03:06:23.850305] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:20.312 [2024-11-18 03:06:23.850447] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70218 ] 00:06:20.571 [2024-11-18 03:06:24.009757] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.571 [2024-11-18 03:06:24.062494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.571 [2024-11-18 03:06:24.062680] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.571 [2024-11-18 03:06:24.062754] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.571 Running I/O for 1 seconds...[2024-11-18 03:06:24.062919] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.953 00:06:21.953 lcore 0: 192546 00:06:21.953 lcore 1: 192546 00:06:21.953 lcore 2: 192545 00:06:21.953 lcore 3: 192545 00:06:21.953 done. 
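The `lt 1.15 2` trace from scripts/common.sh earlier in this test walks `cmp_versions`: both version strings are split on dots and compared component by component, left to right, until one component differs. A standalone sketch of that logic — a hypothetical `version_lt` helper, not the repository's implementation:

```shell
# Hypothetical sketch of the cmp_versions logic traced above from
# scripts/common.sh: split dotted versions on "." and compare the
# numeric components left to right. Missing components count as 0.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i a b
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        a=${v1[i]:-0}
        b=${v2[i]:-0}
        (( a < b )) && return 0   # strictly lower: "lt" holds
        (( a > b )) && return 1   # strictly higher: "lt" fails
    done
    return 1                      # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 sorts before 2"
```

As in the trace, `1.15` compares lower than `2` because the first components (1 vs 2) already decide the ordering, which is why the script selects the lcov 1.x coverage options.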
00:06:21.953 00:06:21.953 real 0m1.349s 00:06:21.953 user 0m4.116s 00:06:21.953 sys 0m0.113s 00:06:21.953 ************************************ 00:06:21.953 END TEST event_perf 00:06:21.953 ************************************ 00:06:21.953 03:06:25 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.953 03:06:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.953 03:06:25 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:21.953 03:06:25 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:21.953 03:06:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.953 03:06:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.953 ************************************ 00:06:21.953 START TEST event_reactor 00:06:21.953 ************************************ 00:06:21.953 03:06:25 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:21.953 [2024-11-18 03:06:25.266109] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:21.953 [2024-11-18 03:06:25.266259] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70263 ] 00:06:21.953 [2024-11-18 03:06:25.424695] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.953 [2024-11-18 03:06:25.474672] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.350 test_start 00:06:23.350 oneshot 00:06:23.350 tick 100 00:06:23.350 tick 100 00:06:23.350 tick 250 00:06:23.350 tick 100 00:06:23.350 tick 100 00:06:23.350 tick 100 00:06:23.350 tick 250 00:06:23.350 tick 500 00:06:23.350 tick 100 00:06:23.350 tick 100 00:06:23.350 tick 250 00:06:23.350 tick 100 00:06:23.350 tick 100 00:06:23.350 test_end 00:06:23.350 ************************************ 00:06:23.350 END TEST event_reactor 00:06:23.350 ************************************ 00:06:23.350 00:06:23.350 real 0m1.343s 00:06:23.350 user 0m1.136s 00:06:23.350 sys 0m0.100s 00:06:23.350 03:06:26 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.350 03:06:26 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:23.350 03:06:26 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:23.350 03:06:26 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:23.350 03:06:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.350 03:06:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.350 ************************************ 00:06:23.350 START TEST event_reactor_perf 00:06:23.350 ************************************ 00:06:23.350 03:06:26 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:23.350 [2024-11-18 
03:06:26.671857] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:23.350 [2024-11-18 03:06:26.672121] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70294 ] 00:06:23.350 [2024-11-18 03:06:26.830926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.350 [2024-11-18 03:06:26.880688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.731 test_start 00:06:24.731 test_end 00:06:24.731 Performance: 381372 events per second 00:06:24.731 00:06:24.731 real 0m1.341s 00:06:24.731 user 0m1.135s 00:06:24.731 sys 0m0.098s 00:06:24.731 03:06:27 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.731 03:06:27 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.731 ************************************ 00:06:24.731 END TEST event_reactor_perf 00:06:24.731 ************************************ 00:06:24.731 03:06:28 event -- event/event.sh@49 -- # uname -s 00:06:24.731 03:06:28 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:24.731 03:06:28 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:24.731 03:06:28 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.731 03:06:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.731 03:06:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.731 ************************************ 00:06:24.731 START TEST event_scheduler 00:06:24.731 ************************************ 00:06:24.731 03:06:28 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:24.731 * Looking for test storage... 
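The reactor_perf figure above (here, 381372 events per second) is simply events processed divided by elapsed wall time. A minimal, illustrative loop in the same spirit — counting no-op iterations for roughly one second — is sketched below; this is a toy, not SPDK's reactor, which runs registered pollers on a dedicated core:

```shell
# Illustrative only: count how many no-op "events" a plain bash loop
# handles in about one second, using the bash SECONDS builtin as a
# coarse timer. reactor_perf measures the real event path instead.
count=0
SECONDS=0
while (( SECONDS < 1 )); do
    count=$(( count + 1 ))
done
echo "processed $count no-op events in ~1 second"
```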
00:06:24.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:24.731 03:06:28 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:24.731 03:06:28 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:24.731 03:06:28 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:24.731 03:06:28 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.731 03:06:28 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:24.731 03:06:28 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.731 03:06:28 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:24.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.731 --rc genhtml_branch_coverage=1 00:06:24.731 --rc genhtml_function_coverage=1 00:06:24.731 --rc genhtml_legend=1 00:06:24.731 --rc geninfo_all_blocks=1 00:06:24.731 --rc geninfo_unexecuted_blocks=1 00:06:24.731 00:06:24.731 ' 00:06:24.731 03:06:28 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:24.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.731 --rc genhtml_branch_coverage=1 00:06:24.731 --rc genhtml_function_coverage=1 00:06:24.731 --rc 
genhtml_legend=1 00:06:24.731 --rc geninfo_all_blocks=1 00:06:24.731 --rc geninfo_unexecuted_blocks=1 00:06:24.731 00:06:24.731 ' 00:06:24.731 03:06:28 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:24.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.731 --rc genhtml_branch_coverage=1 00:06:24.731 --rc genhtml_function_coverage=1 00:06:24.731 --rc genhtml_legend=1 00:06:24.731 --rc geninfo_all_blocks=1 00:06:24.731 --rc geninfo_unexecuted_blocks=1 00:06:24.731 00:06:24.731 ' 00:06:24.731 03:06:28 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:24.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.731 --rc genhtml_branch_coverage=1 00:06:24.731 --rc genhtml_function_coverage=1 00:06:24.731 --rc genhtml_legend=1 00:06:24.731 --rc geninfo_all_blocks=1 00:06:24.731 --rc geninfo_unexecuted_blocks=1 00:06:24.731 00:06:24.731 ' 00:06:24.731 03:06:28 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:24.731 03:06:28 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70370 00:06:24.731 03:06:28 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.731 03:06:28 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70370 00:06:24.731 03:06:28 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70370 ']' 00:06:24.731 03:06:28 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:24.731 03:06:28 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.731 03:06:28 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.731 03:06:28 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:24.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.731 03:06:28 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.731 03:06:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.991 [2024-11-18 03:06:28.343340] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:24.991 [2024-11-18 03:06:28.343548] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70370 ] 00:06:24.991 [2024-11-18 03:06:28.504592] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.991 [2024-11-18 03:06:28.558049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.991 [2024-11-18 03:06:28.558175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.991 [2024-11-18 03:06:28.558227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.991 [2024-11-18 03:06:28.558290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.930 03:06:29 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.930 03:06:29 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:25.930 03:06:29 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:25.930 03:06:29 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.930 03:06:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.930 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:25.930 POWER: Cannot set governor of lcore 0 to userspace 00:06:25.930 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:25.930 POWER: Cannot set governor of lcore 0 to performance 00:06:25.930 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:25.930 POWER: Cannot set governor of lcore 0 to userspace 00:06:25.930 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:25.930 POWER: Cannot set governor of lcore 0 to userspace 00:06:25.930 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:25.930 POWER: Unable to set Power Management Environment for lcore 0 00:06:25.930 [2024-11-18 03:06:29.231054] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:25.930 [2024-11-18 03:06:29.231107] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:25.930 [2024-11-18 03:06:29.231151] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:25.930 [2024-11-18 03:06:29.231197] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:25.930 [2024-11-18 03:06:29.231233] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:25.930 [2024-11-18 03:06:29.231295] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:25.930 03:06:29 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.930 03:06:29 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:25.930 03:06:29 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.930 03:06:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.930 [2024-11-18 03:06:29.307898] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
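The POWER errors above come from the dynamic scheduler probing `/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor`; inside a VM, as in this run, cpufreq is typically absent, so the dpdk governor fails to initialize and the scheduler falls back with only notices. A hedged way to inspect the same path on a host (hypothetical snippet, not part of the test scripts):

```shell
# Hypothetical check of the cpufreq path the dynamic scheduler probes.
# On bare metal this prints the active governor (e.g. "performance");
# on VMs like this test host the file is usually missing.
gov_file=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [[ -r "$gov_file" ]]; then
    echo "cpu0 governor: $(cat "$gov_file")"
else
    echo "cpufreq not available; dpdk governor cannot be initialized"
fi
```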
00:06:25.930 03:06:29 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.930 03:06:29 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:25.930 03:06:29 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.930 03:06:29 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.930 03:06:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.930 ************************************ 00:06:25.930 START TEST scheduler_create_thread 00:06:25.930 ************************************ 00:06:25.930 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:25.930 03:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:25.930 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.930 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.930 2 00:06:25.930 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.930 03:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.931 3 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.931 4 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.931 5 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.931 6 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:25.931 7 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.931 8 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.931 9 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.931 10 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.931 03:06:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.870 03:06:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.870 03:06:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:26.870 03:06:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.870 03:06:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.251 03:06:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.251 03:06:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:28.251 03:06:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:28.251 03:06:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.251 03:06:31 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.190 ************************************ 00:06:29.190 END TEST scheduler_create_thread 00:06:29.190 ************************************ 00:06:29.190 03:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.190 00:06:29.190 real 0m3.368s 00:06:29.190 user 0m0.029s 00:06:29.190 sys 0m0.005s 00:06:29.190 03:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.190 03:06:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.190 03:06:32 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:29.190 03:06:32 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70370 00:06:29.190 03:06:32 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70370 ']' 00:06:29.190 03:06:32 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70370 00:06:29.190 03:06:32 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:29.190 03:06:32 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.190 03:06:32 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70370 00:06:29.450 killing process with pid 70370 00:06:29.450 03:06:32 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:29.450 03:06:32 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:29.450 03:06:32 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70370' 00:06:29.450 03:06:32 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70370 00:06:29.450 03:06:32 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70370 00:06:29.710 [2024-11-18 03:06:33.065578] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:29.969 00:06:29.969 real 0m5.307s 00:06:29.969 user 0m10.520s 00:06:29.969 sys 0m0.480s 00:06:29.969 ************************************ 00:06:29.969 END TEST event_scheduler 00:06:29.969 ************************************ 00:06:29.969 03:06:33 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.969 03:06:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:29.969 03:06:33 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:29.969 03:06:33 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:29.969 03:06:33 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.969 03:06:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.969 03:06:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.969 ************************************ 00:06:29.969 START TEST app_repeat 00:06:29.969 ************************************ 00:06:29.969 03:06:33 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:29.969 03:06:33 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.969 03:06:33 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.969 03:06:33 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:29.969 03:06:33 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.969 03:06:33 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:29.969 03:06:33 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:29.969 03:06:33 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:29.969 03:06:33 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70475 00:06:29.969 03:06:33 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:29.969 
03:06:33 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.969 03:06:33 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70475' 00:06:29.969 Process app_repeat pid: 70475 00:06:29.969 03:06:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:29.969 spdk_app_start Round 0 00:06:29.969 03:06:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:29.969 03:06:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70475 /var/tmp/spdk-nbd.sock 00:06:29.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:29.969 03:06:33 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70475 ']' 00:06:29.969 03:06:33 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.969 03:06:33 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.969 03:06:33 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:29.969 03:06:33 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.969 03:06:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.969 [2024-11-18 03:06:33.478820] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:29.969 [2024-11-18 03:06:33.478950] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70475 ] 00:06:30.229 [2024-11-18 03:06:33.624202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.229 [2024-11-18 03:06:33.675044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.229 [2024-11-18 03:06:33.675152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.167 03:06:34 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.167 03:06:34 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:31.167 03:06:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.167 Malloc0 00:06:31.167 03:06:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.426 Malloc1 00:06:31.426 03:06:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.426 03:06:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.426 03:06:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.426 03:06:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.426 03:06:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.426 03:06:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.426 03:06:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.426 03:06:34 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.426 03:06:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.426 03:06:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.426 03:06:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.426 03:06:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.426 03:06:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.426 03:06:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.426 03:06:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.426 03:06:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:31.685 /dev/nbd0 00:06:31.685 03:06:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.685 03:06:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.685 03:06:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:31.685 03:06:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:31.685 03:06:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:31.685 03:06:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:31.685 03:06:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:31.685 03:06:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:31.685 03:06:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:31.685 03:06:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:31.685 03:06:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.685 1+0 records in 00:06:31.685 1+0 
records out 00:06:31.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627829 s, 6.5 MB/s 00:06:31.685 03:06:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.685 03:06:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:31.685 03:06:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.685 03:06:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:31.685 03:06:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:31.685 03:06:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.685 03:06:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.685 03:06:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:31.945 /dev/nbd1 00:06:31.945 03:06:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:31.945 03:06:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:31.945 03:06:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:31.945 03:06:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:31.945 03:06:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:31.945 03:06:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:31.945 03:06:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:31.945 03:06:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:31.945 03:06:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:31.945 03:06:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:31.945 03:06:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.945 1+0 records in 00:06:31.945 1+0 records out 00:06:31.945 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299882 s, 13.7 MB/s 00:06:31.945 03:06:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.945 03:06:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:31.945 03:06:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.945 03:06:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:31.945 03:06:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:31.945 03:06:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.945 03:06:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.945 03:06:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.945 03:06:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.945 03:06:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:32.205 { 00:06:32.205 "nbd_device": "/dev/nbd0", 00:06:32.205 "bdev_name": "Malloc0" 00:06:32.205 }, 00:06:32.205 { 00:06:32.205 "nbd_device": "/dev/nbd1", 00:06:32.205 "bdev_name": "Malloc1" 00:06:32.205 } 00:06:32.205 ]' 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.205 { 00:06:32.205 "nbd_device": "/dev/nbd0", 00:06:32.205 "bdev_name": "Malloc0" 00:06:32.205 }, 00:06:32.205 { 00:06:32.205 "nbd_device": "/dev/nbd1", 00:06:32.205 "bdev_name": "Malloc1" 00:06:32.205 } 00:06:32.205 ]' 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.205 /dev/nbd1' 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.205 /dev/nbd1' 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.205 256+0 records in 00:06:32.205 256+0 records out 00:06:32.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013768 s, 76.2 MB/s 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.205 256+0 records in 00:06:32.205 256+0 records out 00:06:32.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246195 s, 42.6 MB/s 00:06:32.205 03:06:35 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.205 256+0 records in 00:06:32.205 256+0 records out 00:06:32.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232842 s, 45.0 MB/s 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.205 03:06:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:32.465 03:06:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.465 03:06:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.465 03:06:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.465 03:06:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.465 03:06:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.465 03:06:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.465 03:06:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.465 03:06:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.465 03:06:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.465 03:06:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.725 03:06:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.725 03:06:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:32.725 03:06:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.725 03:06:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.725 03:06:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.725 03:06:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:32.725 03:06:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:32.725 03:06:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.725 03:06:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.725 03:06:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.725 03:06:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.985 03:06:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.985 03:06:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:32.985 03:06:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.985 03:06:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:32.985 03:06:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:32.985 03:06:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.985 03:06:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:32.985 03:06:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:32.985 03:06:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:32.985 03:06:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:32.985 03:06:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:32.985 03:06:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:32.985 03:06:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.296 03:06:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:33.580 [2024-11-18 03:06:36.921946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.580 [2024-11-18 03:06:36.971971] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.580 [2024-11-18 03:06:36.972002] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.580 
[2024-11-18 03:06:37.014051] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:33.580 [2024-11-18 03:06:37.014117] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:36.876 spdk_app_start Round 1 00:06:36.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:36.876 03:06:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:36.877 03:06:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:36.877 03:06:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70475 /var/tmp/spdk-nbd.sock 00:06:36.877 03:06:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70475 ']' 00:06:36.877 03:06:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.877 03:06:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.877 03:06:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:36.877 03:06:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.877 03:06:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.877 03:06:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.877 03:06:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:36.877 03:06:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.877 Malloc0 00:06:36.877 03:06:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.877 Malloc1 00:06:36.877 03:06:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.877 03:06:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.877 03:06:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.877 03:06:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.877 03:06:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.877 03:06:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.877 03:06:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.877 03:06:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.877 03:06:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.877 03:06:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.877 03:06:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.877 03:06:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.877 03:06:40 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:36.877 03:06:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.877 03:06:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.877 03:06:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:37.136 /dev/nbd0 00:06:37.136 03:06:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.136 03:06:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.136 03:06:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:37.136 03:06:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:37.136 03:06:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:37.136 03:06:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:37.136 03:06:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:37.136 03:06:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:37.136 03:06:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:37.136 03:06:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:37.136 03:06:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.136 1+0 records in 00:06:37.136 1+0 records out 00:06:37.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407182 s, 10.1 MB/s 00:06:37.136 03:06:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.136 03:06:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:37.136 03:06:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.136 
03:06:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:37.136 03:06:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:37.136 03:06:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.136 03:06:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.136 03:06:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:37.395 /dev/nbd1 00:06:37.395 03:06:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.395 03:06:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.395 03:06:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:37.396 03:06:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:37.396 03:06:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:37.396 03:06:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:37.396 03:06:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:37.396 03:06:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:37.396 03:06:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:37.396 03:06:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:37.396 03:06:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.396 1+0 records in 00:06:37.396 1+0 records out 00:06:37.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405707 s, 10.1 MB/s 00:06:37.396 03:06:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.396 03:06:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:37.396 03:06:40 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.396 03:06:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:37.396 03:06:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:37.396 03:06:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.396 03:06:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.396 03:06:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.396 03:06:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.396 03:06:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:37.655 { 00:06:37.655 "nbd_device": "/dev/nbd0", 00:06:37.655 "bdev_name": "Malloc0" 00:06:37.655 }, 00:06:37.655 { 00:06:37.655 "nbd_device": "/dev/nbd1", 00:06:37.655 "bdev_name": "Malloc1" 00:06:37.655 } 00:06:37.655 ]' 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.655 { 00:06:37.655 "nbd_device": "/dev/nbd0", 00:06:37.655 "bdev_name": "Malloc0" 00:06:37.655 }, 00:06:37.655 { 00:06:37.655 "nbd_device": "/dev/nbd1", 00:06:37.655 "bdev_name": "Malloc1" 00:06:37.655 } 00:06:37.655 ]' 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:37.655 /dev/nbd1' 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:37.655 /dev/nbd1' 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:37.655 
03:06:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:37.655 256+0 records in 00:06:37.655 256+0 records out 00:06:37.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136642 s, 76.7 MB/s 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:37.655 256+0 records in 00:06:37.655 256+0 records out 00:06:37.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187176 s, 56.0 MB/s 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.655 03:06:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:37.915 256+0 records in 00:06:37.915 256+0 records out 00:06:37.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220259 s, 47.6 MB/s 00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.915 03:06:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.175 03:06:41 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.175 03:06:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.434 03:06:41 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.434 03:06:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.434 03:06:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.434 03:06:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.434 03:06:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.434 03:06:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.434 03:06:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:38.434 03:06:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.434 03:06:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.434 03:06:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:38.434 03:06:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:38.434 03:06:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:38.434 03:06:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:38.693 03:06:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:38.952 [2024-11-18 03:06:42.392138] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.952 [2024-11-18 03:06:42.441507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.952 [2024-11-18 03:06:42.441530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.952 [2024-11-18 03:06:42.484185] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:38.952 [2024-11-18 03:06:42.484242] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:42.243 03:06:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:42.243 spdk_app_start Round 2 00:06:42.243 03:06:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:42.243 03:06:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70475 /var/tmp/spdk-nbd.sock 00:06:42.243 03:06:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70475 ']' 00:06:42.243 03:06:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.243 03:06:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.243 03:06:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:42.243 03:06:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.243 03:06:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.243 03:06:45 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.243 03:06:45 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:42.243 03:06:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.243 Malloc0 00:06:42.243 03:06:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.502 Malloc1 00:06:42.502 03:06:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:42.502 03:06:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.502 03:06:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:42.502 
03:06:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:42.502 03:06:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.502 03:06:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:42.502 03:06:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:42.502 03:06:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.502 03:06:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:42.502 03:06:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:42.502 03:06:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.502 03:06:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:42.502 03:06:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:42.502 03:06:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:42.502 03:06:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.502 03:06:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:42.761 /dev/nbd0 00:06:42.761 03:06:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:42.761 03:06:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:42.761 03:06:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:42.761 03:06:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:42.761 03:06:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:42.761 03:06:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:42.761 03:06:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:42.761 03:06:46 
event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:42.761 03:06:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:42.761 03:06:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:42.761 03:06:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:42.761 1+0 records in 00:06:42.761 1+0 records out 00:06:42.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188751 s, 21.7 MB/s 00:06:42.761 03:06:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:42.761 03:06:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:42.761 03:06:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:42.761 03:06:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:42.761 03:06:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:42.761 03:06:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.761 03:06:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.761 03:06:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:43.032 /dev/nbd1 00:06:43.032 03:06:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:43.032 03:06:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:43.032 03:06:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:43.032 03:06:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:43.032 03:06:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:43.032 03:06:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:43.032 03:06:46 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:43.032 03:06:46 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:43.032 03:06:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:43.032 03:06:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:43.032 03:06:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.032 1+0 records in 00:06:43.032 1+0 records out 00:06:43.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038365 s, 10.7 MB/s 00:06:43.032 03:06:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.032 03:06:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:43.032 03:06:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.032 03:06:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:43.032 03:06:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:43.032 03:06:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.032 03:06:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.032 03:06:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.032 03:06:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.032 03:06:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:43.308 { 00:06:43.308 "nbd_device": "/dev/nbd0", 00:06:43.308 "bdev_name": "Malloc0" 00:06:43.308 }, 00:06:43.308 { 00:06:43.308 "nbd_device": "/dev/nbd1", 00:06:43.308 "bdev_name": 
"Malloc1" 00:06:43.308 } 00:06:43.308 ]' 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:43.308 { 00:06:43.308 "nbd_device": "/dev/nbd0", 00:06:43.308 "bdev_name": "Malloc0" 00:06:43.308 }, 00:06:43.308 { 00:06:43.308 "nbd_device": "/dev/nbd1", 00:06:43.308 "bdev_name": "Malloc1" 00:06:43.308 } 00:06:43.308 ]' 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:43.308 /dev/nbd1' 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:43.308 /dev/nbd1' 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:43.308 256+0 records in 00:06:43.308 256+0 records out 00:06:43.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137154 s, 76.5 MB/s 
00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:43.308 256+0 records in 00:06:43.308 256+0 records out 00:06:43.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172073 s, 60.9 MB/s 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:43.308 256+0 records in 00:06:43.308 256+0 records out 00:06:43.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235824 s, 44.5 MB/s 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.308 03:06:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:43.569 03:06:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:43.569 03:06:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:43.569 03:06:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:43.569 03:06:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.569 03:06:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.569 03:06:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:43.569 03:06:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:43.569 03:06:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.569 03:06:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.569 03:06:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:43.829 03:06:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:43.829 03:06:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:43.829 03:06:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:43.829 03:06:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.829 03:06:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.829 03:06:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:43.829 03:06:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:43.829 03:06:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.829 03:06:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.829 03:06:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.829 03:06:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.088 03:06:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.088 03:06:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.088 03:06:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.088 03:06:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:44.088 03:06:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.088 03:06:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.088 03:06:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:44.088 03:06:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.088 03:06:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.088 03:06:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:44.088 03:06:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:44.088 03:06:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:44.088 03:06:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:44.350 03:06:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:44.612 [2024-11-18 03:06:47.929798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.612 [2024-11-18 03:06:47.978926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.612 [2024-11-18 03:06:47.978956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.612 [2024-11-18 03:06:48.021357] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:44.612 [2024-11-18 03:06:48.021421] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:47.905 03:06:50 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70475 /var/tmp/spdk-nbd.sock 00:06:47.905 03:06:50 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70475 ']' 00:06:47.905 03:06:50 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:47.905 03:06:50 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:47.905 03:06:50 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:47.905 03:06:50 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.905 03:06:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:47.905 03:06:50 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.905 03:06:50 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:47.905 03:06:50 event.app_repeat -- event/event.sh@39 -- # killprocess 70475 00:06:47.905 03:06:50 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70475 ']' 00:06:47.905 03:06:50 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70475 00:06:47.905 03:06:50 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:47.905 03:06:51 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.905 03:06:51 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70475 00:06:47.905 killing process with pid 70475 00:06:47.905 03:06:51 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.905 03:06:51 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.905 03:06:51 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70475' 00:06:47.905 03:06:51 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70475 00:06:47.905 03:06:51 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70475 00:06:47.905 spdk_app_start is called in Round 0. 00:06:47.905 Shutdown signal received, stop current app iteration 00:06:47.905 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:47.905 spdk_app_start is called in Round 1. 00:06:47.905 Shutdown signal received, stop current app iteration 00:06:47.905 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:47.905 spdk_app_start is called in Round 2. 
00:06:47.905 Shutdown signal received, stop current app iteration 00:06:47.905 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:47.905 spdk_app_start is called in Round 3. 00:06:47.905 Shutdown signal received, stop current app iteration 00:06:47.905 ************************************ 00:06:47.905 END TEST app_repeat 00:06:47.905 ************************************ 00:06:47.905 03:06:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:47.905 03:06:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:47.905 00:06:47.905 real 0m17.820s 00:06:47.905 user 0m39.378s 00:06:47.905 sys 0m2.787s 00:06:47.905 03:06:51 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.905 03:06:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:47.905 03:06:51 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:47.905 03:06:51 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:47.905 03:06:51 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.905 03:06:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.905 03:06:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.905 ************************************ 00:06:47.905 START TEST cpu_locks 00:06:47.905 ************************************ 00:06:47.905 03:06:51 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:47.905 * Looking for test storage... 
00:06:47.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:47.905 03:06:51 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:47.905 03:06:51 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:47.906 03:06:51 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:48.166 03:06:51 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.166 03:06:51 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:48.166 03:06:51 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.166 03:06:51 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:48.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.166 --rc genhtml_branch_coverage=1 00:06:48.166 --rc genhtml_function_coverage=1 00:06:48.166 --rc genhtml_legend=1 00:06:48.166 --rc geninfo_all_blocks=1 00:06:48.166 --rc geninfo_unexecuted_blocks=1 00:06:48.166 00:06:48.166 ' 00:06:48.166 03:06:51 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:48.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.166 --rc genhtml_branch_coverage=1 00:06:48.166 --rc genhtml_function_coverage=1 00:06:48.166 --rc genhtml_legend=1 00:06:48.166 --rc geninfo_all_blocks=1 00:06:48.166 --rc geninfo_unexecuted_blocks=1 
00:06:48.166 00:06:48.166 ' 00:06:48.166 03:06:51 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:48.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.166 --rc genhtml_branch_coverage=1 00:06:48.166 --rc genhtml_function_coverage=1 00:06:48.166 --rc genhtml_legend=1 00:06:48.166 --rc geninfo_all_blocks=1 00:06:48.166 --rc geninfo_unexecuted_blocks=1 00:06:48.166 00:06:48.166 ' 00:06:48.166 03:06:51 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:48.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.166 --rc genhtml_branch_coverage=1 00:06:48.166 --rc genhtml_function_coverage=1 00:06:48.166 --rc genhtml_legend=1 00:06:48.166 --rc geninfo_all_blocks=1 00:06:48.166 --rc geninfo_unexecuted_blocks=1 00:06:48.166 00:06:48.166 ' 00:06:48.166 03:06:51 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:48.166 03:06:51 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:48.166 03:06:51 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:48.166 03:06:51 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:48.166 03:06:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.166 03:06:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.166 03:06:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.166 ************************************ 00:06:48.166 START TEST default_locks 00:06:48.166 ************************************ 00:06:48.166 03:06:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:48.166 03:06:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70903 00:06:48.166 03:06:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.166 
03:06:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70903 00:06:48.166 03:06:51 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70903 ']' 00:06:48.166 03:06:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.166 03:06:51 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.166 03:06:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.166 03:06:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.166 03:06:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.166 [2024-11-18 03:06:51.639110] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:48.166 [2024-11-18 03:06:51.639343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70903 ] 00:06:48.426 [2024-11-18 03:06:51.798604] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.426 [2024-11-18 03:06:51.848782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.996 03:06:52 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.996 03:06:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:48.996 03:06:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70903 00:06:48.996 03:06:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70903 00:06:48.996 03:06:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.256 03:06:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70903 00:06:49.256 03:06:52 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70903 ']' 00:06:49.256 03:06:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70903 00:06:49.256 03:06:52 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:49.256 03:06:52 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.256 03:06:52 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70903 00:06:49.256 03:06:52 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.256 03:06:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.256 03:06:52 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 70903' 00:06:49.256 killing process with pid 70903 00:06:49.256 03:06:52 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70903 00:06:49.256 03:06:52 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70903 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70903 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70903 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70903 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70903 ']' 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.825 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70903) - No such process 00:06:49.825 ERROR: process (pid: 70903) is no longer running 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:49.825 ************************************ 00:06:49.825 END TEST default_locks 00:06:49.825 ************************************ 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:49.825 00:06:49.825 real 0m1.559s 00:06:49.825 user 0m1.534s 00:06:49.825 sys 0m0.502s 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.825 03:06:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.825 03:06:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:49.825 03:06:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:06:49.825 03:06:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.825 03:06:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.825 ************************************ 00:06:49.825 START TEST default_locks_via_rpc 00:06:49.825 ************************************ 00:06:49.825 03:06:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:49.825 03:06:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70951 00:06:49.825 03:06:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70951 00:06:49.825 03:06:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.825 03:06:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70951 ']' 00:06:49.825 03:06:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.825 03:06:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.825 03:06:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.825 03:06:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.825 03:06:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.825 [2024-11-18 03:06:53.265563] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:49.825 [2024-11-18 03:06:53.265714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70951 ] 00:06:50.085 [2024-11-18 03:06:53.426644] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.085 [2024-11-18 03:06:53.476710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.654 03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.654 03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:50.654 03:06:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:50.654 03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.654 03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.654 03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.654 03:06:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:50.654 03:06:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:50.654 03:06:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:50.654 03:06:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:50.654 03:06:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:50.654 03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.654 03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.654 03:06:54 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.654 03:06:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70951 00:06:50.654 03:06:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70951 00:06:50.654 03:06:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.914 03:06:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70951 00:06:50.914 03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70951 ']' 00:06:50.914 03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70951 00:06:50.914 03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:50.914 03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.914 03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70951 00:06:51.174 killing process with pid 70951 00:06:51.174 03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.174 03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.174 03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70951' 00:06:51.174 03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70951 00:06:51.174 03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70951 00:06:51.434 ************************************ 00:06:51.434 END TEST default_locks_via_rpc 00:06:51.434 ************************************ 00:06:51.434 00:06:51.434 real 0m1.728s 00:06:51.434 user 0m1.703s 00:06:51.434 sys 0m0.592s 00:06:51.434 
03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.434 03:06:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.434 03:06:54 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:51.434 03:06:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.434 03:06:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.434 03:06:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.434 ************************************ 00:06:51.434 START TEST non_locking_app_on_locked_coremask 00:06:51.434 ************************************ 00:06:51.434 03:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:51.434 03:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71003 00:06:51.434 03:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.434 03:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71003 /var/tmp/spdk.sock 00:06:51.434 03:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71003 ']' 00:06:51.434 03:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.434 03:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.434 03:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:51.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.434 03:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.434 03:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.694 [2024-11-18 03:06:55.057346] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:51.694 [2024-11-18 03:06:55.057472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71003 ] 00:06:51.694 [2024-11-18 03:06:55.217129] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.694 [2024-11-18 03:06:55.267795] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.698 03:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.698 03:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:52.698 03:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:52.698 03:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71019 00:06:52.698 03:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71019 /var/tmp/spdk2.sock 00:06:52.698 03:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71019 ']' 00:06:52.698 03:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.698 03:06:55 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.698 03:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.698 03:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.698 03:06:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.698 [2024-11-18 03:06:55.975736] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:52.698 [2024-11-18 03:06:55.975967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71019 ] 00:06:52.698 [2024-11-18 03:06:56.127252] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:52.698 [2024-11-18 03:06:56.127347] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.698 [2024-11-18 03:06:56.229932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.635 03:06:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.635 03:06:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:53.635 03:06:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71003 00:06:53.635 03:06:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71003 00:06:53.635 03:06:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.894 03:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71003 00:06:53.894 03:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71003 ']' 00:06:53.894 03:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71003 00:06:53.894 03:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:53.894 03:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.894 03:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71003 00:06:53.894 killing process with pid 71003 00:06:53.894 03:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.894 03:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.894 03:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 71003' 00:06:53.894 03:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71003 00:06:53.894 03:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71003 00:06:54.830 03:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71019 00:06:54.830 03:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71019 ']' 00:06:54.830 03:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71019 00:06:54.830 03:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:54.830 03:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.830 03:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71019 00:06:54.830 killing process with pid 71019 00:06:54.830 03:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.830 03:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.830 03:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71019' 00:06:54.830 03:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71019 00:06:54.830 03:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71019 00:06:55.089 00:06:55.089 real 0m3.593s 00:06:55.089 user 0m3.805s 00:06:55.089 sys 0m1.080s 00:06:55.089 03:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:55.089 ************************************ 00:06:55.089 END TEST non_locking_app_on_locked_coremask 00:06:55.089 ************************************ 00:06:55.089 03:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.089 03:06:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:55.089 03:06:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.089 03:06:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.089 03:06:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.089 ************************************ 00:06:55.089 START TEST locking_app_on_unlocked_coremask 00:06:55.089 ************************************ 00:06:55.089 03:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:55.089 03:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71082 00:06:55.089 03:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71082 /var/tmp/spdk.sock 00:06:55.089 03:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:55.089 03:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71082 ']' 00:06:55.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:55.089 03:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.089 03:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.089 03:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.089 03:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.089 03:06:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.349 [2024-11-18 03:06:58.717735] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:55.349 [2024-11-18 03:06:58.717900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71082 ] 00:06:55.349 [2024-11-18 03:06:58.877581] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:55.349 [2024-11-18 03:06:58.877654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.608 [2024-11-18 03:06:58.927625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.174 03:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.174 03:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:56.174 03:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71094 00:06:56.174 03:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71094 /var/tmp/spdk2.sock 00:06:56.175 03:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:56.175 03:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71094 ']' 00:06:56.175 03:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.175 03:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.175 03:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.175 03:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.175 03:06:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.175 [2024-11-18 03:06:59.637198] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:56.175 [2024-11-18 03:06:59.637441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71094 ] 00:06:56.432 [2024-11-18 03:06:59.790146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.432 [2024-11-18 03:06:59.888311] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.999 03:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.999 03:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:56.999 03:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71094 00:06:56.999 03:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71094 00:06:56.999 03:07:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.935 03:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71082 00:06:57.935 03:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71082 ']' 00:06:57.935 03:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71082 00:06:57.935 03:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:57.935 03:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.935 03:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71082 00:06:57.935 killing process with pid 71082 00:06:57.935 03:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.935 03:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.935 03:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71082' 00:06:57.935 03:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71082 00:06:57.935 03:07:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71082 00:06:58.503 03:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71094 00:06:58.503 03:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71094 ']' 00:06:58.503 03:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71094 00:06:58.503 03:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:58.503 03:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.503 03:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71094 00:06:58.763 killing process with pid 71094 00:06:58.763 03:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.763 03:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.763 03:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71094' 00:06:58.763 03:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71094 00:06:58.763 03:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@974 -- # wait 71094 00:06:59.022 00:06:59.022 real 0m3.857s 00:06:59.022 user 0m4.117s 00:06:59.022 sys 0m1.150s 00:06:59.022 ************************************ 00:06:59.022 END TEST locking_app_on_unlocked_coremask 00:06:59.022 ************************************ 00:06:59.022 03:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.022 03:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.022 03:07:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:59.022 03:07:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.022 03:07:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.022 03:07:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.022 ************************************ 00:06:59.022 START TEST locking_app_on_locked_coremask 00:06:59.022 ************************************ 00:06:59.022 03:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:59.022 03:07:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71163 00:06:59.022 03:07:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.022 03:07:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71163 /var/tmp/spdk.sock 00:06:59.022 03:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71163 ']' 00:06:59.022 03:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.022 03:07:02 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.023 03:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.023 03:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.023 03:07:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.282 [2024-11-18 03:07:02.641993] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:59.282 [2024-11-18 03:07:02.642214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71163 ] 00:06:59.282 [2024-11-18 03:07:02.799120] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.282 [2024-11-18 03:07:02.849790] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.221 03:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.221 03:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:00.221 03:07:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71179 00:07:00.221 03:07:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71179 /var/tmp/spdk2.sock 00:07:00.221 03:07:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:00.221 03:07:03 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@650 -- # local es=0 00:07:00.221 03:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71179 /var/tmp/spdk2.sock 00:07:00.221 03:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:00.221 03:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.221 03:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:00.221 03:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.221 03:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71179 /var/tmp/spdk2.sock 00:07:00.221 03:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71179 ']' 00:07:00.221 03:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.221 03:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.221 03:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.221 03:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.221 03:07:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.221 [2024-11-18 03:07:03.560483] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:00.221 [2024-11-18 03:07:03.560702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71179 ] 00:07:00.221 [2024-11-18 03:07:03.712166] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71163 has claimed it. 00:07:00.221 [2024-11-18 03:07:03.712252] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:00.791 ERROR: process (pid: 71179) is no longer running 00:07:00.791 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71179) - No such process 00:07:00.791 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.791 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:00.791 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:00.791 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:00.791 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:00.791 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:00.791 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71163 00:07:00.791 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71163 00:07:00.791 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.362 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71163 00:07:01.362 03:07:04 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71163 ']' 00:07:01.362 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71163 00:07:01.362 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:01.362 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.362 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71163 00:07:01.362 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.362 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.362 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71163' 00:07:01.362 killing process with pid 71163 00:07:01.362 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71163 00:07:01.362 03:07:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71163 00:07:01.622 00:07:01.622 real 0m2.602s 00:07:01.622 user 0m2.825s 00:07:01.622 sys 0m0.771s 00:07:01.622 03:07:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.622 03:07:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.622 ************************************ 00:07:01.622 END TEST locking_app_on_locked_coremask 00:07:01.622 ************************************ 00:07:01.882 03:07:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:01.883 03:07:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
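The `locks_exist` helper above confirms a claim by running `lslocks -p <pid> | grep -q spdk_cpu_lock`: the target holds one exclusive `flock` per claimed core on a `/var/tmp/spdk_cpu_lock_*` file. A hedged sketch of that lock pattern using the util-linux `flock` utility (the `/tmp` demo path is a stand-in, not SPDK's real lock path):

```shell
#!/usr/bin/env bash
# Per-core lock pattern: take an exclusive flock on a lock file; a
# second non-blocking attempt on the same file then fails, which is
# why the second spdk_tgt above could not claim the already-held core.
# The path here is a stand-in for /var/tmp/spdk_cpu_lock_000.
lockfile=$(mktemp /tmp/demo_cpu_lock_XXXXXX)

exec 9>"$lockfile"                 # keep the lock file open on fd 9
flock -n 9 && echo "core claimed"  # first non-blocking claim succeeds

# Any other open file description now loses the non-blocking race:
flock -n "$lockfile" -c true || echo "core already claimed"
```

The lock is released automatically when the holding process exits, which is why `killprocess` of the first target is enough to free the core.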
00:07:01.883 03:07:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.883 03:07:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.883 ************************************ 00:07:01.883 START TEST locking_overlapped_coremask 00:07:01.883 ************************************ 00:07:01.883 03:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:01.883 03:07:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71232 00:07:01.883 03:07:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:01.883 03:07:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71232 /var/tmp/spdk.sock 00:07:01.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.883 03:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71232 ']' 00:07:01.883 03:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.883 03:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.883 03:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.883 03:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.883 03:07:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.883 [2024-11-18 03:07:05.313043] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
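The `-m 0x7` coremask above selects bits 0 through 2, which is why the target reports three available cores and starts reactors on cores 0, 1 and 2. A small sketch of decoding such a hex mask into core indices:

```shell
#!/usr/bin/env bash
# Decode an SPDK/DPDK-style hex coremask into the core indices it selects.
mask=0x7          # as passed via: spdk_tgt -m 0x7
cores=()
for ((bit = 0; bit < 64; bit++)); do
    if (( (mask >> bit) & 1 )); then
        cores+=("$bit")
    fi
done
echo "cores: ${cores[*]}"    # → cores: 0 1 2
```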
00:07:01.883 [2024-11-18 03:07:05.313177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71232 ] 00:07:02.142 [2024-11-18 03:07:05.476703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.142 [2024-11-18 03:07:05.529055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.142 [2024-11-18 03:07:05.529033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.142 [2024-11-18 03:07:05.529154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.712 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.712 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:02.712 03:07:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:02.712 03:07:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71249 00:07:02.712 03:07:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71249 /var/tmp/spdk2.sock 00:07:02.712 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:02.712 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71249 /var/tmp/spdk2.sock 00:07:02.712 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:02.712 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.712 03:07:06 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:02.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:02.712 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.712 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71249 /var/tmp/spdk2.sock 00:07:02.712 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71249 ']' 00:07:02.712 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.712 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.712 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:02.712 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.712 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.712 [2024-11-18 03:07:06.235855] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:02.712 [2024-11-18 03:07:06.236114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71249 ] 00:07:02.973 [2024-11-18 03:07:06.396873] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71232 has claimed it. 00:07:02.973 [2024-11-18 03:07:06.396972] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
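The claim failure above is the expected outcome: the first target's mask `0x7` covers cores 0-2 and the second target's `0x1c` covers cores 2-4, so they intersect on core 2 — exactly the core named in the error. The overlap is just the bitwise AND of the two masks:

```shell
#!/usr/bin/env bash
# 0x7 = 0b00111 (cores 0-2), 0x1c = 0b11100 (cores 2-4);
# their intersection is bit 2, i.e. the contested core 2.
printf '0x%x\n' $(( 0x7 & 0x1c ))    # prints 0x4
```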
00:07:03.542 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71249) - No such process 00:07:03.542 ERROR: process (pid: 71249) is no longer running 00:07:03.542 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.542 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:03.542 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:03.542 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.542 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.542 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.542 03:07:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:03.542 03:07:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:03.543 03:07:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:03.543 03:07:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:03.543 03:07:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71232 00:07:03.543 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71232 ']' 00:07:03.543 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71232 00:07:03.543 03:07:06 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:03.543 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.543 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71232 00:07:03.543 killing process with pid 71232 00:07:03.543 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.543 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.543 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71232' 00:07:03.543 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71232 00:07:03.543 03:07:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71232 00:07:03.803 00:07:03.803 real 0m2.120s 00:07:03.803 user 0m5.640s 00:07:03.803 sys 0m0.525s 00:07:03.803 ************************************ 00:07:03.803 END TEST locking_overlapped_coremask 00:07:03.803 ************************************ 00:07:03.803 03:07:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.803 03:07:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.062 03:07:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:04.062 03:07:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.062 03:07:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.062 03:07:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.062 ************************************ 00:07:04.062 START TEST 
locking_overlapped_coremask_via_rpc 00:07:04.062 ************************************ 00:07:04.063 03:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:04.063 03:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71292 00:07:04.063 03:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:04.063 03:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71292 /var/tmp/spdk.sock 00:07:04.063 03:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71292 ']' 00:07:04.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.063 03:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.063 03:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.063 03:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.063 03:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.063 03:07:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.063 [2024-11-18 03:07:07.490585] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:04.063 [2024-11-18 03:07:07.490713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71292 ] 00:07:04.323 [2024-11-18 03:07:07.646168] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:04.323 [2024-11-18 03:07:07.646239] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.323 [2024-11-18 03:07:07.697194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.323 [2024-11-18 03:07:07.697283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.323 [2024-11-18 03:07:07.697380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.892 03:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.893 03:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:04.893 03:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71310 00:07:04.893 03:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71310 /var/tmp/spdk2.sock 00:07:04.893 03:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:04.893 03:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71310 ']' 00:07:04.893 03:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.893 03:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.893 03:07:08 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.893 03:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.893 03:07:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.893 [2024-11-18 03:07:08.411393] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:04.893 [2024-11-18 03:07:08.411644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71310 ] 00:07:05.152 [2024-11-18 03:07:08.567192] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:05.152 [2024-11-18 03:07:08.567264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.152 [2024-11-18 03:07:08.671565] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.152 [2024-11-18 03:07:08.671585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.153 [2024-11-18 03:07:08.671632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:05.726 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.726 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:05.726 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:05.726 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.726 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.726 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.726 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.726 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:05.726 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.726 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:05.726 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.726 03:07:09 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:05.726 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.726 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.726 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.726 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.726 [2024-11-18 03:07:09.267209] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71292 has claimed it. 00:07:05.726 request: 00:07:05.726 { 00:07:05.726 "method": "framework_enable_cpumask_locks", 00:07:05.727 "req_id": 1 00:07:05.727 } 00:07:05.727 Got JSON-RPC error response 00:07:05.727 response: 00:07:05.727 { 00:07:05.727 "code": -32603, 00:07:05.727 "message": "Failed to claim CPU core: 2" 00:07:05.727 } 00:07:05.727 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:05.727 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:05.727 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.727 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.727 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.727 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71292 /var/tmp/spdk.sock 00:07:05.727 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 71292 ']' 00:07:05.727 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.727 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.727 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.727 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.727 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.987 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.987 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:05.987 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71310 /var/tmp/spdk2.sock 00:07:05.987 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71310 ']' 00:07:05.987 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.987 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.987 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:05.987 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.987 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.247 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.247 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:06.247 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:06.247 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:06.247 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:06.247 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:06.247 00:07:06.247 real 0m2.325s 00:07:06.247 user 0m1.099s 00:07:06.247 sys 0m0.151s 00:07:06.247 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.247 03:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.247 ************************************ 00:07:06.247 END TEST locking_overlapped_coremask_via_rpc 00:07:06.247 ************************************ 00:07:06.247 03:07:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:06.247 03:07:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71292 ]] 00:07:06.247 03:07:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 71292 00:07:06.247 03:07:09 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71292 ']' 00:07:06.247 03:07:09 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71292 00:07:06.247 03:07:09 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:06.247 03:07:09 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.247 03:07:09 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71292 00:07:06.247 03:07:09 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.247 03:07:09 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.247 03:07:09 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71292' 00:07:06.247 killing process with pid 71292 00:07:06.247 03:07:09 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71292 00:07:06.247 03:07:09 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71292 00:07:06.817 03:07:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71310 ]] 00:07:06.817 03:07:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71310 00:07:06.817 03:07:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71310 ']' 00:07:06.817 03:07:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71310 00:07:06.817 03:07:10 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:06.817 03:07:10 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.817 03:07:10 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71310 00:07:06.817 killing process with pid 71310 00:07:06.817 03:07:10 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:06.817 03:07:10 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:06.817 03:07:10 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 71310' 00:07:06.817 03:07:10 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71310 00:07:06.817 03:07:10 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71310 00:07:07.077 03:07:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:07.077 Process with pid 71292 is not found 00:07:07.077 Process with pid 71310 is not found 00:07:07.077 03:07:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:07.077 03:07:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71292 ]] 00:07:07.077 03:07:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71292 00:07:07.077 03:07:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71292 ']' 00:07:07.077 03:07:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71292 00:07:07.077 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71292) - No such process 00:07:07.077 03:07:10 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71292 is not found' 00:07:07.077 03:07:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71310 ]] 00:07:07.077 03:07:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71310 00:07:07.077 03:07:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71310 ']' 00:07:07.077 03:07:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71310 00:07:07.077 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71310) - No such process 00:07:07.077 03:07:10 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71310 is not found' 00:07:07.077 03:07:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:07.336 ************************************ 00:07:07.336 END TEST cpu_locks 00:07:07.336 ************************************ 00:07:07.336 00:07:07.336 real 0m19.344s 00:07:07.336 user 0m32.163s 00:07:07.336 sys 0m5.871s 00:07:07.336 03:07:10 event.cpu_locks -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:07.336 03:07:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.336 ************************************ 00:07:07.336 END TEST event 00:07:07.336 ************************************ 00:07:07.337 00:07:07.337 real 0m47.110s 00:07:07.337 user 1m28.694s 00:07:07.337 sys 0m9.825s 00:07:07.337 03:07:10 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.337 03:07:10 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.337 03:07:10 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:07.337 03:07:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.337 03:07:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.337 03:07:10 -- common/autotest_common.sh@10 -- # set +x 00:07:07.337 ************************************ 00:07:07.337 START TEST thread 00:07:07.337 ************************************ 00:07:07.337 03:07:10 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:07.337 * Looking for test storage... 
00:07:07.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:07.337 03:07:10 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:07.337 03:07:10 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:07.337 03:07:10 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:07.597 03:07:10 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:07.597 03:07:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.597 03:07:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.597 03:07:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.597 03:07:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.597 03:07:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.597 03:07:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.597 03:07:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.597 03:07:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.597 03:07:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.597 03:07:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.597 03:07:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.597 03:07:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:07.597 03:07:10 thread -- scripts/common.sh@345 -- # : 1 00:07:07.597 03:07:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.597 03:07:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.597 03:07:10 thread -- scripts/common.sh@365 -- # decimal 1 00:07:07.597 03:07:10 thread -- scripts/common.sh@353 -- # local d=1 00:07:07.597 03:07:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.597 03:07:10 thread -- scripts/common.sh@355 -- # echo 1 00:07:07.597 03:07:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.597 03:07:10 thread -- scripts/common.sh@366 -- # decimal 2 00:07:07.597 03:07:10 thread -- scripts/common.sh@353 -- # local d=2 00:07:07.597 03:07:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.597 03:07:10 thread -- scripts/common.sh@355 -- # echo 2 00:07:07.597 03:07:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.597 03:07:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.597 03:07:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.597 03:07:10 thread -- scripts/common.sh@368 -- # return 0 00:07:07.597 03:07:10 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.597 03:07:10 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:07.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.597 --rc genhtml_branch_coverage=1 00:07:07.597 --rc genhtml_function_coverage=1 00:07:07.597 --rc genhtml_legend=1 00:07:07.597 --rc geninfo_all_blocks=1 00:07:07.597 --rc geninfo_unexecuted_blocks=1 00:07:07.597 00:07:07.597 ' 00:07:07.597 03:07:10 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:07.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.597 --rc genhtml_branch_coverage=1 00:07:07.597 --rc genhtml_function_coverage=1 00:07:07.597 --rc genhtml_legend=1 00:07:07.597 --rc geninfo_all_blocks=1 00:07:07.597 --rc geninfo_unexecuted_blocks=1 00:07:07.597 00:07:07.597 ' 00:07:07.597 03:07:10 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:07.597 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.597 --rc genhtml_branch_coverage=1 00:07:07.597 --rc genhtml_function_coverage=1 00:07:07.597 --rc genhtml_legend=1 00:07:07.597 --rc geninfo_all_blocks=1 00:07:07.597 --rc geninfo_unexecuted_blocks=1 00:07:07.597 00:07:07.597 ' 00:07:07.597 03:07:10 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:07.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.597 --rc genhtml_branch_coverage=1 00:07:07.597 --rc genhtml_function_coverage=1 00:07:07.597 --rc genhtml_legend=1 00:07:07.597 --rc geninfo_all_blocks=1 00:07:07.597 --rc geninfo_unexecuted_blocks=1 00:07:07.597 00:07:07.597 ' 00:07:07.597 03:07:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:07.597 03:07:10 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:07.597 03:07:10 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.597 03:07:10 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.597 ************************************ 00:07:07.597 START TEST thread_poller_perf 00:07:07.597 ************************************ 00:07:07.597 03:07:10 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:07.597 [2024-11-18 03:07:11.028766] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:07.597 [2024-11-18 03:07:11.029006] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71439 ] 00:07:07.857 [2024-11-18 03:07:11.176116] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.857 [2024-11-18 03:07:11.225879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.857 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:08.797 [2024-11-18T03:07:12.374Z] ====================================== 00:07:08.797 [2024-11-18T03:07:12.374Z] busy:2300934208 (cyc) 00:07:08.797 [2024-11-18T03:07:12.374Z] total_run_count: 399000 00:07:08.797 [2024-11-18T03:07:12.374Z] tsc_hz: 2290000000 (cyc) 00:07:08.797 [2024-11-18T03:07:12.374Z] ====================================== 00:07:08.797 [2024-11-18T03:07:12.374Z] poller_cost: 5766 (cyc), 2517 (nsec) 00:07:08.797 00:07:08.797 real 0m1.351s 00:07:08.797 user 0m1.155s 00:07:08.797 sys 0m0.089s 00:07:08.797 03:07:12 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.797 03:07:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:08.797 ************************************ 00:07:08.797 END TEST thread_poller_perf 00:07:08.797 ************************************ 00:07:09.057 03:07:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:09.057 03:07:12 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:09.057 03:07:12 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.057 03:07:12 thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.057 ************************************ 00:07:09.057 START TEST thread_poller_perf 00:07:09.057 
************************************ 00:07:09.057 03:07:12 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:09.057 [2024-11-18 03:07:12.442684] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:09.057 [2024-11-18 03:07:12.442886] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71481 ] 00:07:09.057 [2024-11-18 03:07:12.590025] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.318 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:09.318 [2024-11-18 03:07:12.639990] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.292 [2024-11-18T03:07:13.869Z] ====================================== 00:07:10.292 [2024-11-18T03:07:13.869Z] busy:2293409556 (cyc) 00:07:10.292 [2024-11-18T03:07:13.869Z] total_run_count: 5209000 00:07:10.292 [2024-11-18T03:07:13.869Z] tsc_hz: 2290000000 (cyc) 00:07:10.292 [2024-11-18T03:07:13.869Z] ====================================== 00:07:10.292 [2024-11-18T03:07:13.869Z] poller_cost: 440 (cyc), 192 (nsec) 00:07:10.292 ************************************ 00:07:10.292 00:07:10.292 real 0m1.336s 00:07:10.292 user 0m1.140s 00:07:10.292 sys 0m0.089s 00:07:10.292 03:07:13 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.292 03:07:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:10.292 END TEST thread_poller_perf 00:07:10.292 ************************************ 00:07:10.292 03:07:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:10.292 ************************************ 00:07:10.292 END TEST thread 00:07:10.292 ************************************ 00:07:10.292 
00:07:10.292 real 0m3.023s 00:07:10.292 user 0m2.466s 00:07:10.292 sys 0m0.358s 00:07:10.292 03:07:13 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.292 03:07:13 thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.292 03:07:13 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:10.292 03:07:13 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:10.292 03:07:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.292 03:07:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.292 03:07:13 -- common/autotest_common.sh@10 -- # set +x 00:07:10.292 ************************************ 00:07:10.292 START TEST app_cmdline 00:07:10.292 ************************************ 00:07:10.292 03:07:13 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:10.552 * Looking for test storage... 00:07:10.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:10.552 03:07:13 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:10.552 03:07:13 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:10.552 03:07:13 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:10.552 03:07:14 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:10.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.552 03:07:14 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:10.552 03:07:14 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.552 03:07:14 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:10.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.552 --rc genhtml_branch_coverage=1 00:07:10.552 --rc genhtml_function_coverage=1 00:07:10.552 --rc genhtml_legend=1 00:07:10.552 --rc geninfo_all_blocks=1 00:07:10.552 --rc geninfo_unexecuted_blocks=1 00:07:10.552 00:07:10.552 ' 00:07:10.552 03:07:14 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:10.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.552 --rc genhtml_branch_coverage=1 00:07:10.552 --rc genhtml_function_coverage=1 00:07:10.552 --rc genhtml_legend=1 00:07:10.552 --rc geninfo_all_blocks=1 00:07:10.552 --rc geninfo_unexecuted_blocks=1 00:07:10.552 00:07:10.552 ' 00:07:10.552 03:07:14 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:10.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.552 --rc genhtml_branch_coverage=1 00:07:10.552 --rc genhtml_function_coverage=1 00:07:10.552 --rc genhtml_legend=1 00:07:10.552 --rc geninfo_all_blocks=1 00:07:10.552 --rc geninfo_unexecuted_blocks=1 00:07:10.552 00:07:10.552 ' 00:07:10.552 03:07:14 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:10.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.552 --rc genhtml_branch_coverage=1 00:07:10.552 --rc genhtml_function_coverage=1 00:07:10.552 --rc genhtml_legend=1 00:07:10.552 --rc geninfo_all_blocks=1 00:07:10.552 --rc 
geninfo_unexecuted_blocks=1 00:07:10.552 00:07:10.552 ' 00:07:10.552 03:07:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:10.552 03:07:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71559 00:07:10.552 03:07:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71559 00:07:10.552 03:07:14 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:10.552 03:07:14 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71559 ']' 00:07:10.552 03:07:14 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.552 03:07:14 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.552 03:07:14 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.552 03:07:14 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.552 03:07:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:10.812 [2024-11-18 03:07:14.137407] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:10.812 [2024-11-18 03:07:14.137637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71559 ] 00:07:10.812 [2024-11-18 03:07:14.281164] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.812 [2024-11-18 03:07:14.331104] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.751 03:07:14 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.751 03:07:14 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:11.751 03:07:14 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:11.751 { 00:07:11.751 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:07:11.751 "fields": { 00:07:11.751 "major": 24, 00:07:11.751 "minor": 9, 00:07:11.751 "patch": 1, 00:07:11.751 "suffix": "-pre", 00:07:11.751 "commit": "b18e1bd62" 00:07:11.751 } 00:07:11.751 } 00:07:11.751 03:07:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:11.751 03:07:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:11.751 03:07:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:11.751 03:07:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:11.751 03:07:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:11.751 03:07:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:11.751 03:07:15 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.751 03:07:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:11.751 03:07:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:11.751 03:07:15 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.751 03:07:15 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:11.751 03:07:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:11.751 03:07:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.751 03:07:15 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:11.752 03:07:15 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.752 03:07:15 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.752 03:07:15 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.752 03:07:15 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.752 03:07:15 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.752 03:07:15 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.752 03:07:15 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.752 03:07:15 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.752 03:07:15 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:11.752 03:07:15 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.012 request: 00:07:12.012 { 00:07:12.012 "method": "env_dpdk_get_mem_stats", 00:07:12.012 "req_id": 1 00:07:12.012 } 00:07:12.012 Got JSON-RPC error response 00:07:12.012 response: 00:07:12.012 { 00:07:12.012 "code": -32601, 00:07:12.012 "message": "Method not found" 00:07:12.012 } 00:07:12.012 03:07:15 app_cmdline -- common/autotest_common.sh@653 -- # es=1 
00:07:12.012 03:07:15 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:12.012 03:07:15 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:12.012 03:07:15 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:12.012 03:07:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71559 00:07:12.012 03:07:15 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71559 ']' 00:07:12.012 03:07:15 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71559 00:07:12.012 03:07:15 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:12.012 03:07:15 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.012 03:07:15 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71559 00:07:12.012 killing process with pid 71559 00:07:12.012 03:07:15 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.012 03:07:15 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.012 03:07:15 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71559' 00:07:12.012 03:07:15 app_cmdline -- common/autotest_common.sh@969 -- # kill 71559 00:07:12.012 03:07:15 app_cmdline -- common/autotest_common.sh@974 -- # wait 71559 00:07:12.582 ************************************ 00:07:12.582 END TEST app_cmdline 00:07:12.582 ************************************ 00:07:12.582 00:07:12.582 real 0m2.028s 00:07:12.582 user 0m2.262s 00:07:12.582 sys 0m0.566s 00:07:12.582 03:07:15 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.582 03:07:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:12.582 03:07:15 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:12.582 03:07:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.582 03:07:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.582 03:07:15 -- 
common/autotest_common.sh@10 -- # set +x 00:07:12.582 ************************************ 00:07:12.582 START TEST version 00:07:12.582 ************************************ 00:07:12.582 03:07:15 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:12.582 * Looking for test storage... 00:07:12.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:12.582 03:07:16 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:12.582 03:07:16 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:12.582 03:07:16 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:12.582 03:07:16 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:12.582 03:07:16 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.582 03:07:16 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.582 03:07:16 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.582 03:07:16 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.582 03:07:16 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.582 03:07:16 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.582 03:07:16 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.582 03:07:16 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.582 03:07:16 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.582 03:07:16 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.582 03:07:16 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.582 03:07:16 version -- scripts/common.sh@344 -- # case "$op" in 00:07:12.582 03:07:16 version -- scripts/common.sh@345 -- # : 1 00:07:12.582 03:07:16 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.582 03:07:16 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.582 03:07:16 version -- scripts/common.sh@365 -- # decimal 1 00:07:12.582 03:07:16 version -- scripts/common.sh@353 -- # local d=1 00:07:12.582 03:07:16 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.582 03:07:16 version -- scripts/common.sh@355 -- # echo 1 00:07:12.582 03:07:16 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.582 03:07:16 version -- scripts/common.sh@366 -- # decimal 2 00:07:12.582 03:07:16 version -- scripts/common.sh@353 -- # local d=2 00:07:12.582 03:07:16 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.582 03:07:16 version -- scripts/common.sh@355 -- # echo 2 00:07:12.582 03:07:16 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.582 03:07:16 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.582 03:07:16 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.582 03:07:16 version -- scripts/common.sh@368 -- # return 0 00:07:12.582 03:07:16 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.582 03:07:16 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:12.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.582 --rc genhtml_branch_coverage=1 00:07:12.582 --rc genhtml_function_coverage=1 00:07:12.582 --rc genhtml_legend=1 00:07:12.582 --rc geninfo_all_blocks=1 00:07:12.582 --rc geninfo_unexecuted_blocks=1 00:07:12.582 00:07:12.582 ' 00:07:12.582 03:07:16 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:12.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.582 --rc genhtml_branch_coverage=1 00:07:12.582 --rc genhtml_function_coverage=1 00:07:12.582 --rc genhtml_legend=1 00:07:12.582 --rc geninfo_all_blocks=1 00:07:12.582 --rc geninfo_unexecuted_blocks=1 00:07:12.582 00:07:12.582 ' 00:07:12.582 03:07:16 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:12.582 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.582 --rc genhtml_branch_coverage=1 00:07:12.582 --rc genhtml_function_coverage=1 00:07:12.582 --rc genhtml_legend=1 00:07:12.582 --rc geninfo_all_blocks=1 00:07:12.582 --rc geninfo_unexecuted_blocks=1 00:07:12.582 00:07:12.582 ' 00:07:12.582 03:07:16 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:12.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.582 --rc genhtml_branch_coverage=1 00:07:12.582 --rc genhtml_function_coverage=1 00:07:12.582 --rc genhtml_legend=1 00:07:12.582 --rc geninfo_all_blocks=1 00:07:12.582 --rc geninfo_unexecuted_blocks=1 00:07:12.582 00:07:12.582 ' 00:07:12.582 03:07:16 version -- app/version.sh@17 -- # get_header_version major 00:07:12.582 03:07:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:12.582 03:07:16 version -- app/version.sh@14 -- # cut -f2 00:07:12.582 03:07:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.582 03:07:16 version -- app/version.sh@17 -- # major=24 00:07:12.582 03:07:16 version -- app/version.sh@18 -- # get_header_version minor 00:07:12.582 03:07:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:12.582 03:07:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.582 03:07:16 version -- app/version.sh@14 -- # cut -f2 00:07:12.842 03:07:16 version -- app/version.sh@18 -- # minor=9 00:07:12.842 03:07:16 version -- app/version.sh@19 -- # get_header_version patch 00:07:12.842 03:07:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:12.842 03:07:16 version -- app/version.sh@14 -- # cut -f2 00:07:12.842 03:07:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.842 03:07:16 version -- app/version.sh@19 -- # patch=1 00:07:12.842 
03:07:16 version -- app/version.sh@20 -- # get_header_version suffix 00:07:12.842 03:07:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:12.842 03:07:16 version -- app/version.sh@14 -- # cut -f2 00:07:12.842 03:07:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.842 03:07:16 version -- app/version.sh@20 -- # suffix=-pre 00:07:12.842 03:07:16 version -- app/version.sh@22 -- # version=24.9 00:07:12.842 03:07:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:12.842 03:07:16 version -- app/version.sh@25 -- # version=24.9.1 00:07:12.842 03:07:16 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:12.842 03:07:16 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:12.842 03:07:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:12.842 03:07:16 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:12.842 03:07:16 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:12.842 ************************************ 00:07:12.842 END TEST version 00:07:12.842 ************************************ 00:07:12.842 00:07:12.842 real 0m0.308s 00:07:12.842 user 0m0.180s 00:07:12.842 sys 0m0.183s 00:07:12.842 03:07:16 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.842 03:07:16 version -- common/autotest_common.sh@10 -- # set +x 00:07:12.842 03:07:16 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:12.842 03:07:16 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:12.842 03:07:16 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:12.842 03:07:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.842 03:07:16 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.842 03:07:16 -- common/autotest_common.sh@10 -- # set +x 00:07:12.842 ************************************ 00:07:12.842 START TEST bdev_raid 00:07:12.842 ************************************ 00:07:12.842 03:07:16 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:12.842 * Looking for test storage... 00:07:13.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:13.102 03:07:16 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:13.102 03:07:16 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:07:13.102 03:07:16 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:13.102 03:07:16 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.102 03:07:16 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:13.102 03:07:16 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.102 03:07:16 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:13.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.102 --rc genhtml_branch_coverage=1 00:07:13.102 --rc genhtml_function_coverage=1 00:07:13.102 --rc genhtml_legend=1 00:07:13.102 --rc geninfo_all_blocks=1 00:07:13.102 --rc geninfo_unexecuted_blocks=1 00:07:13.102 00:07:13.102 ' 00:07:13.102 03:07:16 bdev_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:13.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.102 --rc genhtml_branch_coverage=1 00:07:13.102 --rc genhtml_function_coverage=1 00:07:13.102 --rc genhtml_legend=1 00:07:13.102 --rc geninfo_all_blocks=1 00:07:13.102 --rc geninfo_unexecuted_blocks=1 00:07:13.102 00:07:13.102 ' 00:07:13.102 03:07:16 bdev_raid -- common/autotest_common.sh@1695 -- 
# export 'LCOV=lcov 00:07:13.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.102 --rc genhtml_branch_coverage=1 00:07:13.102 --rc genhtml_function_coverage=1 00:07:13.103 --rc genhtml_legend=1 00:07:13.103 --rc geninfo_all_blocks=1 00:07:13.103 --rc geninfo_unexecuted_blocks=1 00:07:13.103 00:07:13.103 ' 00:07:13.103 03:07:16 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:13.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.103 --rc genhtml_branch_coverage=1 00:07:13.103 --rc genhtml_function_coverage=1 00:07:13.103 --rc genhtml_legend=1 00:07:13.103 --rc geninfo_all_blocks=1 00:07:13.103 --rc geninfo_unexecuted_blocks=1 00:07:13.103 00:07:13.103 ' 00:07:13.103 03:07:16 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:13.103 03:07:16 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:13.103 03:07:16 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:13.103 03:07:16 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:13.103 03:07:16 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:13.103 03:07:16 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:13.103 03:07:16 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:13.103 03:07:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.103 03:07:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.103 03:07:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:13.103 ************************************ 00:07:13.103 START TEST raid1_resize_data_offset_test 00:07:13.103 ************************************ 00:07:13.103 03:07:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:07:13.103 03:07:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=71729 00:07:13.103 03:07:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71729' 00:07:13.103 Process raid pid: 71729 00:07:13.103 03:07:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:13.103 03:07:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71729 00:07:13.103 03:07:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 71729 ']' 00:07:13.103 03:07:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.103 03:07:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.103 03:07:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.103 03:07:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.103 03:07:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.103 [2024-11-18 03:07:16.604777] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:13.103 [2024-11-18 03:07:16.605017] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.363 [2024-11-18 03:07:16.769032] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.363 [2024-11-18 03:07:16.819333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.363 [2024-11-18 03:07:16.861437] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.363 [2024-11-18 03:07:16.861554] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.933 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.933 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:07:13.933 03:07:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:13.933 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.933 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.933 malloc0 00:07:13.933 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.933 03:07:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:13.933 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.933 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.933 malloc1 00:07:13.933 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.933 03:07:17 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:13.933 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.933 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.193 null0 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.193 [2024-11-18 03:07:17.521815] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:14.193 [2024-11-18 03:07:17.523757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:14.193 [2024-11-18 03:07:17.523857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:14.193 [2024-11-18 03:07:17.524027] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:14.193 [2024-11-18 03:07:17.524046] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:14.193 [2024-11-18 03:07:17.524347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:07:14.193 [2024-11-18 03:07:17.524504] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:14.193 [2024-11-18 03:07:17.524518] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:14.193 [2024-11-18 03:07:17.524659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.193 [2024-11-18 03:07:17.605705] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.193 malloc2 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.193 [2024-11-18 03:07:17.728271] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:14.193 [2024-11-18 03:07:17.732635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.193 [2024-11-18 03:07:17.734707] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.193 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.453 03:07:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:14.453 03:07:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71729 00:07:14.453 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 71729 ']' 00:07:14.453 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 71729 00:07:14.453 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:07:14.453 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:07:14.453 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71729 00:07:14.453 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.453 killing process with pid 71729 00:07:14.453 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.453 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71729' 00:07:14.453 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 71729 00:07:14.453 [2024-11-18 03:07:17.814165] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.453 03:07:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 71729 00:07:14.453 [2024-11-18 03:07:17.815825] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:14.453 [2024-11-18 03:07:17.815894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.453 [2024-11-18 03:07:17.815912] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:14.453 [2024-11-18 03:07:17.821939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.453 [2024-11-18 03:07:17.822247] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.453 [2024-11-18 03:07:17.822270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:14.713 [2024-11-18 03:07:18.032560] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.713 ************************************ 00:07:14.713 END TEST raid1_resize_data_offset_test 00:07:14.713 ************************************ 00:07:14.713 03:07:18 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:07:14.713 00:07:14.713 real 0m1.750s 00:07:14.713 user 0m1.777s 00:07:14.713 sys 0m0.419s 00:07:14.713 03:07:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.713 03:07:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.973 03:07:18 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:14.973 03:07:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:14.973 03:07:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.973 03:07:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.973 ************************************ 00:07:14.973 START TEST raid0_resize_superblock_test 00:07:14.973 ************************************ 00:07:14.973 03:07:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:07:14.973 03:07:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:14.973 Process raid pid: 71775 00:07:14.973 03:07:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71775 00:07:14.973 03:07:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:14.973 03:07:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71775' 00:07:14.973 03:07:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71775 00:07:14.973 03:07:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71775 ']' 00:07:14.973 03:07:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.973 03:07:18 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.973 03:07:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.973 03:07:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.973 03:07:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.973 [2024-11-18 03:07:18.419612] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:14.973 [2024-11-18 03:07:18.419846] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.233 [2024-11-18 03:07:18.581884] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.233 [2024-11-18 03:07:18.634513] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.233 [2024-11-18 03:07:18.677015] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.233 [2024-11-18 03:07:18.677051] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.809 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.809 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:15.809 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:15.809 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.809 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:16.071 malloc0 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.071 [2024-11-18 03:07:19.389947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:16.071 [2024-11-18 03:07:19.390108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.071 [2024-11-18 03:07:19.390141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:16.071 [2024-11-18 03:07:19.390153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.071 [2024-11-18 03:07:19.392527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.071 [2024-11-18 03:07:19.392574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:16.071 pt0 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.071 88b3d323-1280-4bbe-992c-def60f734a7f 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.071 23a2d378-3256-4c3a-a54d-d430c19f15af 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.071 0aa9c202-d890-47e0-8f15-04b0cc00c281 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.071 [2024-11-18 03:07:19.530015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 23a2d378-3256-4c3a-a54d-d430c19f15af is claimed 00:07:16.071 [2024-11-18 03:07:19.530126] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0aa9c202-d890-47e0-8f15-04b0cc00c281 is claimed 00:07:16.071 [2024-11-18 03:07:19.530267] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:16.071 [2024-11-18 03:07:19.530292] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:16.071 [2024-11-18 03:07:19.530586] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:16.071 [2024-11-18 03:07:19.530755] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:16.071 [2024-11-18 03:07:19.530770] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:16.071 [2024-11-18 03:07:19.530930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.071 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.072 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:16.072 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:16.072 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.072 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.072 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.072 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:16.072 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:16.072 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:16.072 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.072 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.072 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.072 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:16.072 03:07:19 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:16.072 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:16.072 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:16.072 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:16.072 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.072 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.072 [2024-11-18 03:07:19.630075] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.332 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.332 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:16.332 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:16.332 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:16.332 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:16.332 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.332 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.332 [2024-11-18 03:07:19.661953] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:16.333 [2024-11-18 03:07:19.662000] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '23a2d378-3256-4c3a-a54d-d430c19f15af' was resized: old size 131072, new size 204800 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.333 [2024-11-18 03:07:19.673874] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:16.333 [2024-11-18 03:07:19.673902] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0aa9c202-d890-47e0-8f15-04b0cc00c281' was resized: old size 131072, new size 204800 00:07:16.333 [2024-11-18 03:07:19.673933] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.333 03:07:19 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:16.333 [2024-11-18 03:07:19.781748] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.333 [2024-11-18 03:07:19.829539] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:07:16.333 [2024-11-18 03:07:19.829637] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:16.333 [2024-11-18 03:07:19.829650] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:16.333 [2024-11-18 03:07:19.829666] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:16.333 [2024-11-18 03:07:19.829793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.333 [2024-11-18 03:07:19.829830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:16.333 [2024-11-18 03:07:19.829844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.333 [2024-11-18 03:07:19.841383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:16.333 [2024-11-18 03:07:19.841454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.333 [2024-11-18 03:07:19.841476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:16.333 [2024-11-18 03:07:19.841489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.333 [2024-11-18 03:07:19.843781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.333 [2024-11-18 03:07:19.843879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:16.333 [2024-11-18 03:07:19.845537] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 23a2d378-3256-4c3a-a54d-d430c19f15af 00:07:16.333 [2024-11-18 03:07:19.845600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 23a2d378-3256-4c3a-a54d-d430c19f15af is claimed 00:07:16.333 [2024-11-18 03:07:19.845676] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0aa9c202-d890-47e0-8f15-04b0cc00c281 00:07:16.333 [2024-11-18 03:07:19.845694] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0aa9c202-d890-47e0-8f15-04b0cc00c281 is claimed 00:07:16.333 [2024-11-18 03:07:19.845825] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 0aa9c202-d890-47e0-8f15-04b0cc00c281 (2) smaller than existing raid bdev Raid (3) 00:07:16.333 [2024-11-18 03:07:19.845847] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 23a2d378-3256-4c3a-a54d-d430c19f15af: File exists 00:07:16.333 [2024-11-18 03:07:19.845880] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:07:16.333 [2024-11-18 03:07:19.845889] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:16.333 [2024-11-18 03:07:19.846143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:16.333 pt0 00:07:16.333 [2024-11-18 03:07:19.846262] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:07:16.333 [2024-11-18 03:07:19.846276] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:07:16.333 [2024-11-18 03:07:19.846409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.333 [2024-11-18 03:07:19.869892] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71775 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71775 ']' 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71775 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:07:16.333 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.593 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71775 00:07:16.593 killing process with pid 71775 00:07:16.593 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.593 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.593 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71775' 00:07:16.593 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71775 00:07:16.593 [2024-11-18 03:07:19.941023] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:16.593 [2024-11-18 03:07:19.941115] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.593 [2024-11-18 03:07:19.941161] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:16.593 [2024-11-18 03:07:19.941171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:07:16.593 03:07:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71775 00:07:16.593 [2024-11-18 03:07:20.100164] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:16.853 ************************************ 00:07:16.853 END TEST raid0_resize_superblock_test 00:07:16.853 ************************************ 00:07:16.853 03:07:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:16.853 00:07:16.853 real 0m2.008s 00:07:16.853 user 0m2.260s 00:07:16.853 sys 0m0.496s 00:07:16.853 03:07:20 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.853 03:07:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.853 03:07:20 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:16.853 03:07:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:16.853 03:07:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.853 03:07:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:16.853 ************************************ 00:07:16.853 START TEST raid1_resize_superblock_test 00:07:16.853 ************************************ 00:07:16.853 03:07:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:07:16.853 03:07:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:16.853 03:07:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71846 00:07:16.853 Process raid pid: 71846 00:07:16.853 03:07:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:16.853 03:07:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71846' 00:07:16.853 03:07:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71846 00:07:16.853 03:07:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71846 ']' 00:07:16.853 03:07:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.853 03:07:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:16.853 03:07:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.853 03:07:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.853 03:07:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.114 [2024-11-18 03:07:20.474122] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:17.114 [2024-11-18 03:07:20.474671] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.114 [2024-11-18 03:07:20.637391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.114 [2024-11-18 03:07:20.687902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.374 [2024-11-18 03:07:20.731968] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.374 [2024-11-18 03:07:20.732001] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.945 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.945 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:17.945 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:17.945 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.945 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.945 malloc0 00:07:17.945 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.945 03:07:21 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:17.945 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.945 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.945 [2024-11-18 03:07:21.454149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:17.945 [2024-11-18 03:07:21.454219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.945 [2024-11-18 03:07:21.454261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:17.945 [2024-11-18 03:07:21.454272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.945 [2024-11-18 03:07:21.456603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.945 [2024-11-18 03:07:21.456659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:17.945 pt0 00:07:17.945 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.945 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:17.945 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.945 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.206 31e1d8f3-cb9d-4f10-b1b7-8ad29630feb1 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.206 03:07:21 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.206 e0f0d872-2736-40d9-8e76-2662d4ce32bc 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.206 f8e2cd82-867c-4545-b2a5-73acaa918aea 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.206 [2024-11-18 03:07:21.590161] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev e0f0d872-2736-40d9-8e76-2662d4ce32bc is claimed 00:07:18.206 [2024-11-18 03:07:21.590348] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev f8e2cd82-867c-4545-b2a5-73acaa918aea is claimed 00:07:18.206 [2024-11-18 03:07:21.590534] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:18.206 [2024-11-18 03:07:21.590595] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:18.206 [2024-11-18 03:07:21.590950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:18.206 [2024-11-18 03:07:21.591217] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:18.206 [2024-11-18 03:07:21.591274] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:18.206 [2024-11-18 03:07:21.591481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.206 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.207 [2024-11-18 03:07:21.694325] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.207 [2024-11-18 03:07:21.718207] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:18.207 [2024-11-18 03:07:21.718244] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e0f0d872-2736-40d9-8e76-2662d4ce32bc' was resized: old size 131072, new size 204800 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:18.207 03:07:21 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.207 [2024-11-18 03:07:21.726093] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:18.207 [2024-11-18 03:07:21.726118] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f8e2cd82-867c-4545-b2a5-73acaa918aea' was resized: old size 131072, new size 204800 00:07:18.207 [2024-11-18 03:07:21.726147] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.207 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.468 [2024-11-18 03:07:21.834136] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.468 [2024-11-18 03:07:21.869764] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:18.468 [2024-11-18 03:07:21.869899] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:07:18.468 [2024-11-18 03:07:21.869942] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:18.468 [2024-11-18 03:07:21.870187] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:18.468 [2024-11-18 03:07:21.870368] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:18.468 [2024-11-18 03:07:21.870425] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:18.468 [2024-11-18 03:07:21.870439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.468 [2024-11-18 03:07:21.881657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:18.468 [2024-11-18 03:07:21.881769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.468 [2024-11-18 03:07:21.881809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:18.468 [2024-11-18 03:07:21.881845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.468 [2024-11-18 03:07:21.884127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.468 [2024-11-18 03:07:21.884207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:18.468 [2024-11-18 03:07:21.885804] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
e0f0d872-2736-40d9-8e76-2662d4ce32bc 00:07:18.468 [2024-11-18 03:07:21.885923] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev e0f0d872-2736-40d9-8e76-2662d4ce32bc is claimed 00:07:18.468 [2024-11-18 03:07:21.886085] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f8e2cd82-867c-4545-b2a5-73acaa918aea 00:07:18.468 [2024-11-18 03:07:21.886182] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev f8e2cd82-867c-4545-b2a5-73acaa918aea is claimed 00:07:18.468 [2024-11-18 03:07:21.886379] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev f8e2cd82-867c-4545-b2a5-73acaa918aea (2) smaller than existing raid bdev Raid (3) 00:07:18.468 [2024-11-18 03:07:21.886445] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev e0f0d872-2736-40d9-8e76-2662d4ce32bc: File exists 00:07:18.468 [2024-11-18 03:07:21.886518] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:07:18.468 [2024-11-18 03:07:21.886547] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:18.468 [2024-11-18 03:07:21.886815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:18.468 [2024-11-18 03:07:21.886994] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:07:18.468 pt0 00:07:18.468 [2024-11-18 03:07:21.887041] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:07:18.468 [2024-11-18 03:07:21.887229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.468 [2024-11-18 03:07:21.910190] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:18.468 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:18.469 03:07:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71846 00:07:18.469 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71846 ']' 00:07:18.469 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71846 00:07:18.469 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:18.469 03:07:21 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.469 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71846 00:07:18.469 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.469 killing process with pid 71846 00:07:18.469 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.469 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71846' 00:07:18.469 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71846 00:07:18.469 [2024-11-18 03:07:21.983800] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:18.469 [2024-11-18 03:07:21.983891] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:18.469 [2024-11-18 03:07:21.983948] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:18.469 [2024-11-18 03:07:21.983976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:07:18.469 03:07:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71846 00:07:18.731 [2024-11-18 03:07:22.143487] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.991 03:07:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:18.991 00:07:18.991 real 0m1.975s 00:07:18.991 user 0m2.251s 00:07:18.991 sys 0m0.455s 00:07:18.991 03:07:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.991 ************************************ 00:07:18.991 END TEST raid1_resize_superblock_test 00:07:18.991 ************************************ 00:07:18.991 03:07:22 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:18.991 03:07:22 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:18.991 03:07:22 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:18.991 03:07:22 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:18.991 03:07:22 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:18.991 03:07:22 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:18.991 03:07:22 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:18.991 03:07:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:18.991 03:07:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.991 03:07:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:18.991 ************************************ 00:07:18.991 START TEST raid_function_test_raid0 00:07:18.991 ************************************ 00:07:18.991 03:07:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:07:18.991 03:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:18.991 03:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:18.991 03:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:18.991 03:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=71921 00:07:18.991 03:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:18.991 03:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71921' 00:07:18.991 Process raid pid: 71921 00:07:18.991 03:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 71921 00:07:18.991 03:07:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # 
'[' -z 71921 ']' 00:07:18.991 03:07:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.991 03:07:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.991 03:07:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.991 03:07:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.991 03:07:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:19.251 [2024-11-18 03:07:22.580724] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:19.251 [2024-11-18 03:07:22.581056] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.251 [2024-11-18 03:07:22.757672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.251 [2024-11-18 03:07:22.808473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.510 [2024-11-18 03:07:22.851605] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.510 [2024-11-18 03:07:22.851711] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.080 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.080 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:07:20.080 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:20.080 03:07:23 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.080 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:20.080 Base_1 00:07:20.080 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.080 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:20.080 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.080 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:20.080 Base_2 00:07:20.080 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.080 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:20.080 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.080 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:20.080 [2024-11-18 03:07:23.484870] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:20.080 [2024-11-18 03:07:23.486912] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:20.080 [2024-11-18 03:07:23.487009] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:20.080 [2024-11-18 03:07:23.487022] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:20.080 [2024-11-18 03:07:23.487303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:20.080 [2024-11-18 03:07:23.487435] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:20.080 [2024-11-18 03:07:23.487451] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000006280 00:07:20.080 [2024-11-18 03:07:23.487624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.081 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.081 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:20.081 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:20.081 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.081 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:20.081 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.081 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:20.081 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:20.081 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:20.081 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:20.081 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:20.081 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:20.081 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:20.081 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:20.081 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:20.081 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:20.081 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:07:20.081 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:20.341 [2024-11-18 03:07:23.732533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:20.341 /dev/nbd0 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.341 1+0 records in 00:07:20.341 1+0 records out 00:07:20.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580287 s, 7.1 MB/s 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 
-- # size=4096 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:20.341 03:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:20.601 { 00:07:20.601 "nbd_device": "/dev/nbd0", 00:07:20.601 "bdev_name": "raid" 00:07:20.601 } 00:07:20.601 ]' 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:20.601 { 00:07:20.601 "nbd_device": "/dev/nbd0", 00:07:20.601 "bdev_name": "raid" 00:07:20.601 } 00:07:20.601 ]' 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:20.601 03:07:24 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:20.601 
03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:20.601 4096+0 records in 00:07:20.601 4096+0 records out 00:07:20.601 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0359342 s, 58.4 MB/s 00:07:20.601 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:20.861 4096+0 records in 00:07:20.861 4096+0 records out 00:07:20.861 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.203147 s, 10.3 MB/s 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:20.861 128+0 records in 00:07:20.861 128+0 records out 00:07:20.861 65536 bytes (66 kB, 64 KiB) copied, 0.0013961 s, 46.9 MB/s 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 
00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:20.861 2035+0 records in 00:07:20.861 2035+0 records out 00:07:20.861 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0128719 s, 80.9 MB/s 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:20.861 456+0 records in 00:07:20.861 456+0 records out 00:07:20.861 233472 bytes (233 kB, 228 KiB) copied, 0.00408895 s, 57.1 MB/s 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:20.861 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:20.861 03:07:24 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:21.121 [2024-11-18 03:07:24.662462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.121 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:21.122 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:21.122 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:21.381 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:21.381 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:21.381 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.381 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:21.381 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.381 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:21.381 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:21.381 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:21.381 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:21.381 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:21.381 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:21.381 03:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 71921 00:07:21.381 03:07:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 71921 ']' 00:07:21.381 03:07:24 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@954 -- # kill -0 71921 00:07:21.381 03:07:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:07:21.641 03:07:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.641 03:07:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71921 00:07:21.641 killing process with pid 71921 00:07:21.641 03:07:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.641 03:07:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.641 03:07:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71921' 00:07:21.641 03:07:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 71921 00:07:21.641 [2024-11-18 03:07:24.993915] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:21.641 [2024-11-18 03:07:24.994052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.641 [2024-11-18 03:07:24.994111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.641 03:07:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 71921 00:07:21.641 [2024-11-18 03:07:24.994125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:07:21.641 [2024-11-18 03:07:25.017625] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.901 ************************************ 00:07:21.901 END TEST raid_function_test_raid0 00:07:21.901 ************************************ 00:07:21.901 03:07:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:21.901 00:07:21.901 real 0m2.787s 00:07:21.901 user 0m3.475s 00:07:21.901 sys 0m0.941s 00:07:21.901 
03:07:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.901 03:07:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:21.901 03:07:25 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:21.901 03:07:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:21.901 03:07:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.901 03:07:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.901 ************************************ 00:07:21.901 START TEST raid_function_test_concat 00:07:21.901 ************************************ 00:07:21.901 03:07:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:07:21.901 03:07:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:21.901 03:07:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:21.901 03:07:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:21.901 03:07:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=72041 00:07:21.901 03:07:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:21.901 Process raid pid: 72041 00:07:21.901 03:07:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 72041' 00:07:21.901 03:07:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 72041 00:07:21.902 03:07:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 72041 ']' 00:07:21.902 03:07:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.902 03:07:25 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.902 03:07:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.902 03:07:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.902 03:07:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:21.902 [2024-11-18 03:07:25.410199] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:21.902 [2024-11-18 03:07:25.410343] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.161 [2024-11-18 03:07:25.572179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.161 [2024-11-18 03:07:25.622887] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.161 [2024-11-18 03:07:25.664865] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.161 [2024-11-18 03:07:25.664923] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.731 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.731 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:07:22.731 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:22.731 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.731 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:22.731 Base_1 
00:07:22.731 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.731 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:22.731 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.731 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:22.731 Base_2 00:07:22.731 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.731 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:22.731 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.731 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:22.731 [2024-11-18 03:07:26.296746] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:22.731 [2024-11-18 03:07:26.298593] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:22.731 [2024-11-18 03:07:26.298668] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:22.731 [2024-11-18 03:07:26.298680] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:22.731 [2024-11-18 03:07:26.298948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:22.731 [2024-11-18 03:07:26.299119] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:22.731 [2024-11-18 03:07:26.299135] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:07:22.731 [2024-11-18 03:07:26.299317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.731 03:07:26 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.731 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:22.990 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:22.991 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.991 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:22.991 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.991 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:22.991 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:22.991 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:22.991 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:22.991 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:22.991 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:22.991 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:22.991 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:22.991 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:22.991 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:22.991 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:22.991 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:07:23.250 [2024-11-18 03:07:26.592304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:23.250 /dev/nbd0 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.250 1+0 records in 00:07:23.250 1+0 records out 00:07:23.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323888 s, 12.6 MB/s 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:23.250 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:23.510 { 00:07:23.510 "nbd_device": "/dev/nbd0", 00:07:23.510 "bdev_name": "raid" 00:07:23.510 } 00:07:23.510 ]' 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:23.510 { 00:07:23.510 "nbd_device": "/dev/nbd0", 00:07:23.510 "bdev_name": "raid" 00:07:23.510 } 00:07:23.510 ]' 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:23.510 03:07:26 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:23.510 4096+0 records in 00:07:23.510 4096+0 records out 00:07:23.510 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0350316 s, 59.9 MB/s 00:07:23.510 03:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:23.770 4096+0 records in 00:07:23.770 4096+0 records out 00:07:23.770 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.195299 s, 10.7 MB/s 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:23.770 128+0 records in 00:07:23.770 128+0 records out 00:07:23.770 65536 bytes (66 kB, 64 KiB) copied, 0.00115906 s, 56.5 MB/s 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:23.770 2035+0 records in 00:07:23.770 2035+0 records out 00:07:23.770 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0122864 s, 84.8 MB/s 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:23.770 456+0 records in 00:07:23.770 456+0 records out 00:07:23.770 233472 bytes (233 kB, 228 KiB) copied, 0.00361826 s, 64.5 MB/s 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.770 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:24.030 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:24.030 [2024-11-18 03:07:27.490585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.030 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:24.030 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:24.030 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.030 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.031 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:24.031 03:07:27 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:24.031 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.031 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:24.031 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:24.031 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 72041 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 72041 ']' 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- 
# kill -0 72041 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72041 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.315 killing process with pid 72041 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72041' 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 72041 00:07:24.315 [2024-11-18 03:07:27.818313] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:24.315 [2024-11-18 03:07:27.818445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.315 [2024-11-18 03:07:27.818502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:24.315 [2024-11-18 03:07:27.818514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:07:24.315 03:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 72041 00:07:24.315 [2024-11-18 03:07:27.841696] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:24.575 03:07:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:24.575 00:07:24.575 real 0m2.755s 00:07:24.575 user 0m3.480s 00:07:24.575 sys 0m0.889s 00:07:24.575 03:07:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.575 03:07:28 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@10 -- # set +x 00:07:24.575 ************************************ 00:07:24.575 END TEST raid_function_test_concat 00:07:24.575 ************************************ 00:07:24.575 03:07:28 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:24.575 03:07:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:24.575 03:07:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.575 03:07:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:24.834 ************************************ 00:07:24.835 START TEST raid0_resize_test 00:07:24.835 ************************************ 00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72158 00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:24.835 Process raid pid: 72158 00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72158' 
00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72158 00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72158 ']' 00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.835 03:07:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.835 [2024-11-18 03:07:28.232310] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:24.835 [2024-11-18 03:07:28.232473] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.835 [2024-11-18 03:07:28.393600] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.094 [2024-11-18 03:07:28.443400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.094 [2024-11-18 03:07:28.485562] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.094 [2024-11-18 03:07:28.485606] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.665 Base_1 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.665 Base_2 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.665 [2024-11-18 03:07:29.078932] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:25.665 [2024-11-18 03:07:29.080895] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:25.665 [2024-11-18 03:07:29.080971] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:25.665 [2024-11-18 03:07:29.080994] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:25.665 [2024-11-18 03:07:29.081253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:07:25.665 [2024-11-18 03:07:29.081375] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:25.665 [2024-11-18 03:07:29.081388] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:25.665 [2024-11-18 03:07:29.081537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.665 [2024-11-18 03:07:29.086863] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:25.665 [2024-11-18 03:07:29.086892] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:25.665 true 
00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.665 [2024-11-18 03:07:29.099072] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.665 [2024-11-18 03:07:29.150792] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:25.665 [2024-11-18 03:07:29.150825] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:25.665 [2024-11-18 03:07:29.150854] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:25.665 true 
00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:25.665 [2024-11-18 03:07:29.162987] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72158 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72158 ']' 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 72158 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.665 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72158 00:07:25.926 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:25.926 03:07:29 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.926 killing process with pid 72158 00:07:25.926 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72158' 00:07:25.926 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 72158 00:07:25.926 [2024-11-18 03:07:29.252367] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.926 [2024-11-18 03:07:29.252496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.926 [2024-11-18 03:07:29.252555] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:25.926 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 72158 00:07:25.926 [2024-11-18 03:07:29.252566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:25.926 [2024-11-18 03:07:29.254192] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.926 03:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:25.926 00:07:25.926 real 0m1.345s 00:07:25.926 user 0m1.500s 00:07:25.926 sys 0m0.315s 00:07:25.926 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.926 03:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.926 ************************************ 00:07:25.926 END TEST raid0_resize_test 00:07:25.926 ************************************ 00:07:26.186 03:07:29 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:26.186 03:07:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:26.186 03:07:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.186 03:07:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:26.186 
************************************ 00:07:26.186 START TEST raid1_resize_test 00:07:26.186 ************************************ 00:07:26.186 03:07:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:07:26.186 03:07:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:26.186 03:07:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:26.186 03:07:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:26.186 03:07:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:26.186 03:07:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:26.186 03:07:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:26.186 03:07:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:26.186 03:07:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:26.186 03:07:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72203 00:07:26.186 03:07:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:26.186 Process raid pid: 72203 00:07:26.186 03:07:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72203' 00:07:26.187 03:07:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72203 00:07:26.187 03:07:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72203 ']' 00:07:26.187 03:07:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.187 03:07:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:26.187 03:07:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.187 03:07:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.187 03:07:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.187 [2024-11-18 03:07:29.646513] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:26.187 [2024-11-18 03:07:29.646670] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.446 [2024-11-18 03:07:29.809195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.446 [2024-11-18 03:07:29.859724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.446 [2024-11-18 03:07:29.901800] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.446 [2024-11-18 03:07:29.901843] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.015 Base_1 00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 
00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.015 Base_2 00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.015 [2024-11-18 03:07:30.519154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:27.015 [2024-11-18 03:07:30.521057] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:27.015 [2024-11-18 03:07:30.521139] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:27.015 [2024-11-18 03:07:30.521150] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:27.015 [2024-11-18 03:07:30.521410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:07:27.015 [2024-11-18 03:07:30.521525] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:27.015 [2024-11-18 03:07:30.521535] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:27.015 [2024-11-18 03:07:30.521672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:27.015 
03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.015 [2024-11-18 03:07:30.531066] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:27.015 [2024-11-18 03:07:30.531097] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:27.015 true 00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.015 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:27.016 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:27.016 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.016 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.016 [2024-11-18 03:07:30.547263] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.016 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.016 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:27.016 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:27.016 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:27.016 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:27.016 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:27.016 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:27.016 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.016 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 
-- # set +x 00:07:27.276 [2024-11-18 03:07:30.590997] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:27.276 [2024-11-18 03:07:30.591028] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:27.276 [2024-11-18 03:07:30.591057] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:27.276 true 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:27.276 [2024-11-18 03:07:30.603180] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72203 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72203 ']' 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 72203 
00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72203 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.276 killing process with pid 72203 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72203' 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 72203 00:07:27.276 [2024-11-18 03:07:30.691708] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.276 [2024-11-18 03:07:30.691817] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.276 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 72203 00:07:27.276 [2024-11-18 03:07:30.692321] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.276 [2024-11-18 03:07:30.692349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:27.276 [2024-11-18 03:07:30.693565] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.537 03:07:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:27.537 00:07:27.537 real 0m1.373s 00:07:27.537 user 0m1.550s 00:07:27.537 sys 0m0.307s 00:07:27.537 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.537 03:07:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.537 ************************************ 00:07:27.537 END TEST 
raid1_resize_test 00:07:27.537 ************************************ 00:07:27.537 03:07:30 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:27.537 03:07:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:27.537 03:07:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:27.537 03:07:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:27.537 03:07:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.537 03:07:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.537 ************************************ 00:07:27.537 START TEST raid_state_function_test 00:07:27.537 ************************************ 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:27.537 03:07:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72255 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72255' 00:07:27.537 Process raid pid: 72255 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72255 00:07:27.537 03:07:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72255 ']' 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.537 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.537 [2024-11-18 03:07:31.093614] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:27.537 [2024-11-18 03:07:31.093768] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.797 [2024-11-18 03:07:31.256074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.797 [2024-11-18 03:07:31.307247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.797 [2024-11-18 03:07:31.349738] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.797 [2024-11-18 03:07:31.349774] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.366 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.366 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:28.366 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:28.366 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.366 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.626 [2024-11-18 03:07:31.947158] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:28.626 [2024-11-18 03:07:31.947214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:28.626 [2024-11-18 03:07:31.947227] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.626 [2024-11-18 03:07:31.947237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.626 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.626 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:28.626 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.626 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.626 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.626 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.626 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.626 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.626 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.626 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.626 
03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.626 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.626 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.626 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.626 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.626 03:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.626 03:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.626 "name": "Existed_Raid", 00:07:28.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.626 "strip_size_kb": 64, 00:07:28.626 "state": "configuring", 00:07:28.626 "raid_level": "raid0", 00:07:28.626 "superblock": false, 00:07:28.626 "num_base_bdevs": 2, 00:07:28.626 "num_base_bdevs_discovered": 0, 00:07:28.626 "num_base_bdevs_operational": 2, 00:07:28.626 "base_bdevs_list": [ 00:07:28.626 { 00:07:28.626 "name": "BaseBdev1", 00:07:28.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.626 "is_configured": false, 00:07:28.626 "data_offset": 0, 00:07:28.626 "data_size": 0 00:07:28.626 }, 00:07:28.626 { 00:07:28.626 "name": "BaseBdev2", 00:07:28.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.626 "is_configured": false, 00:07:28.626 "data_offset": 0, 00:07:28.626 "data_size": 0 00:07:28.626 } 00:07:28.626 ] 00:07:28.626 }' 00:07:28.626 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.626 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:28.887 03:07:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.887 [2024-11-18 03:07:32.366404] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:28.887 [2024-11-18 03:07:32.366507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.887 [2024-11-18 03:07:32.374422] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:28.887 [2024-11-18 03:07:32.374508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:28.887 [2024-11-18 03:07:32.374536] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.887 [2024-11-18 03:07:32.374559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.887 [2024-11-18 03:07:32.391303] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.887 BaseBdev1 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.887 [ 00:07:28.887 { 00:07:28.887 "name": "BaseBdev1", 00:07:28.887 "aliases": [ 00:07:28.887 "369585a4-5944-4cb5-917d-16fbbcec41d4" 00:07:28.887 ], 00:07:28.887 "product_name": "Malloc disk", 00:07:28.887 "block_size": 512, 00:07:28.887 "num_blocks": 65536, 00:07:28.887 "uuid": 
"369585a4-5944-4cb5-917d-16fbbcec41d4", 00:07:28.887 "assigned_rate_limits": { 00:07:28.887 "rw_ios_per_sec": 0, 00:07:28.887 "rw_mbytes_per_sec": 0, 00:07:28.887 "r_mbytes_per_sec": 0, 00:07:28.887 "w_mbytes_per_sec": 0 00:07:28.887 }, 00:07:28.887 "claimed": true, 00:07:28.887 "claim_type": "exclusive_write", 00:07:28.887 "zoned": false, 00:07:28.887 "supported_io_types": { 00:07:28.887 "read": true, 00:07:28.887 "write": true, 00:07:28.887 "unmap": true, 00:07:28.887 "flush": true, 00:07:28.887 "reset": true, 00:07:28.887 "nvme_admin": false, 00:07:28.887 "nvme_io": false, 00:07:28.887 "nvme_io_md": false, 00:07:28.887 "write_zeroes": true, 00:07:28.887 "zcopy": true, 00:07:28.887 "get_zone_info": false, 00:07:28.887 "zone_management": false, 00:07:28.887 "zone_append": false, 00:07:28.887 "compare": false, 00:07:28.887 "compare_and_write": false, 00:07:28.887 "abort": true, 00:07:28.887 "seek_hole": false, 00:07:28.887 "seek_data": false, 00:07:28.887 "copy": true, 00:07:28.887 "nvme_iov_md": false 00:07:28.887 }, 00:07:28.887 "memory_domains": [ 00:07:28.887 { 00:07:28.887 "dma_device_id": "system", 00:07:28.887 "dma_device_type": 1 00:07:28.887 }, 00:07:28.887 { 00:07:28.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.887 "dma_device_type": 2 00:07:28.887 } 00:07:28.887 ], 00:07:28.887 "driver_specific": {} 00:07:28.887 } 00:07:28.887 ] 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.887 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:28.888 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:28.888 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.888 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.888 03:07:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.888 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.888 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.888 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.888 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.888 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.888 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.888 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.888 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.888 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.888 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.888 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.148 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.148 "name": "Existed_Raid", 00:07:29.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.148 "strip_size_kb": 64, 00:07:29.148 "state": "configuring", 00:07:29.148 "raid_level": "raid0", 00:07:29.148 "superblock": false, 00:07:29.148 "num_base_bdevs": 2, 00:07:29.148 "num_base_bdevs_discovered": 1, 00:07:29.148 "num_base_bdevs_operational": 2, 00:07:29.148 "base_bdevs_list": [ 00:07:29.148 { 00:07:29.148 "name": "BaseBdev1", 00:07:29.148 "uuid": "369585a4-5944-4cb5-917d-16fbbcec41d4", 00:07:29.148 "is_configured": true, 00:07:29.148 "data_offset": 0, 
00:07:29.148 "data_size": 65536 00:07:29.148 }, 00:07:29.148 { 00:07:29.148 "name": "BaseBdev2", 00:07:29.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.148 "is_configured": false, 00:07:29.148 "data_offset": 0, 00:07:29.148 "data_size": 0 00:07:29.148 } 00:07:29.148 ] 00:07:29.148 }' 00:07:29.148 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.148 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.411 [2024-11-18 03:07:32.846594] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:29.411 [2024-11-18 03:07:32.846722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.411 [2024-11-18 03:07:32.858600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.411 [2024-11-18 03:07:32.860513] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.411 [2024-11-18 03:07:32.860557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.411 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.412 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.412 03:07:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.412 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.412 "name": "Existed_Raid", 00:07:29.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.412 "strip_size_kb": 64, 00:07:29.412 "state": "configuring", 00:07:29.412 "raid_level": "raid0", 00:07:29.412 "superblock": false, 00:07:29.412 "num_base_bdevs": 2, 00:07:29.412 "num_base_bdevs_discovered": 1, 00:07:29.412 "num_base_bdevs_operational": 2, 00:07:29.412 "base_bdevs_list": [ 00:07:29.412 { 00:07:29.412 "name": "BaseBdev1", 00:07:29.412 "uuid": "369585a4-5944-4cb5-917d-16fbbcec41d4", 00:07:29.412 "is_configured": true, 00:07:29.412 "data_offset": 0, 00:07:29.412 "data_size": 65536 00:07:29.412 }, 00:07:29.412 { 00:07:29.412 "name": "BaseBdev2", 00:07:29.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.412 "is_configured": false, 00:07:29.412 "data_offset": 0, 00:07:29.412 "data_size": 0 00:07:29.412 } 00:07:29.412 ] 00:07:29.412 }' 00:07:29.412 03:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.412 03:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.982 [2024-11-18 03:07:33.283448] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:29.982 [2024-11-18 03:07:33.283502] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:29.982 [2024-11-18 03:07:33.283513] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:29.982 [2024-11-18 03:07:33.283803] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:29.982 [2024-11-18 03:07:33.283995] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:29.982 [2024-11-18 03:07:33.284018] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:29.982 [2024-11-18 03:07:33.284307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.982 BaseBdev2 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.982 03:07:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.982 [ 00:07:29.982 { 00:07:29.982 "name": "BaseBdev2", 00:07:29.982 "aliases": [ 00:07:29.982 "a600d587-286b-45bc-b815-69ef4152505c" 00:07:29.982 ], 00:07:29.982 "product_name": "Malloc disk", 00:07:29.982 "block_size": 512, 00:07:29.982 "num_blocks": 65536, 00:07:29.982 "uuid": "a600d587-286b-45bc-b815-69ef4152505c", 00:07:29.982 "assigned_rate_limits": { 00:07:29.982 "rw_ios_per_sec": 0, 00:07:29.982 "rw_mbytes_per_sec": 0, 00:07:29.982 "r_mbytes_per_sec": 0, 00:07:29.982 "w_mbytes_per_sec": 0 00:07:29.982 }, 00:07:29.982 "claimed": true, 00:07:29.982 "claim_type": "exclusive_write", 00:07:29.982 "zoned": false, 00:07:29.982 "supported_io_types": { 00:07:29.982 "read": true, 00:07:29.982 "write": true, 00:07:29.982 "unmap": true, 00:07:29.982 "flush": true, 00:07:29.982 "reset": true, 00:07:29.982 "nvme_admin": false, 00:07:29.982 "nvme_io": false, 00:07:29.982 "nvme_io_md": false, 00:07:29.982 "write_zeroes": true, 00:07:29.982 "zcopy": true, 00:07:29.982 "get_zone_info": false, 00:07:29.982 "zone_management": false, 00:07:29.982 "zone_append": false, 00:07:29.982 "compare": false, 00:07:29.982 "compare_and_write": false, 00:07:29.982 "abort": true, 00:07:29.982 "seek_hole": false, 00:07:29.982 "seek_data": false, 00:07:29.982 "copy": true, 00:07:29.982 "nvme_iov_md": false 00:07:29.982 }, 00:07:29.982 "memory_domains": [ 00:07:29.982 { 00:07:29.982 "dma_device_id": "system", 00:07:29.982 "dma_device_type": 1 00:07:29.982 }, 00:07:29.982 { 00:07:29.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.982 "dma_device_type": 2 00:07:29.982 } 00:07:29.982 ], 00:07:29.982 "driver_specific": {} 00:07:29.982 } 00:07:29.982 ] 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:29.982 03:07:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:29.982 "name": "Existed_Raid", 00:07:29.982 "uuid": "185d1cba-083e-4d74-a4f2-808da75f9fbe", 00:07:29.982 "strip_size_kb": 64, 00:07:29.982 "state": "online", 00:07:29.982 "raid_level": "raid0", 00:07:29.982 "superblock": false, 00:07:29.982 "num_base_bdevs": 2, 00:07:29.982 "num_base_bdevs_discovered": 2, 00:07:29.982 "num_base_bdevs_operational": 2, 00:07:29.982 "base_bdevs_list": [ 00:07:29.982 { 00:07:29.982 "name": "BaseBdev1", 00:07:29.982 "uuid": "369585a4-5944-4cb5-917d-16fbbcec41d4", 00:07:29.982 "is_configured": true, 00:07:29.982 "data_offset": 0, 00:07:29.982 "data_size": 65536 00:07:29.982 }, 00:07:29.982 { 00:07:29.982 "name": "BaseBdev2", 00:07:29.982 "uuid": "a600d587-286b-45bc-b815-69ef4152505c", 00:07:29.982 "is_configured": true, 00:07:29.982 "data_offset": 0, 00:07:29.982 "data_size": 65536 00:07:29.982 } 00:07:29.982 ] 00:07:29.982 }' 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.982 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.253 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:30.253 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:30.253 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:30.253 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:30.253 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:30.253 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:30.253 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:30.253 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:30.253 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:30.253 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.253 [2024-11-18 03:07:33.739050] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.253 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.253 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:30.253 "name": "Existed_Raid", 00:07:30.253 "aliases": [ 00:07:30.253 "185d1cba-083e-4d74-a4f2-808da75f9fbe" 00:07:30.253 ], 00:07:30.253 "product_name": "Raid Volume", 00:07:30.253 "block_size": 512, 00:07:30.253 "num_blocks": 131072, 00:07:30.253 "uuid": "185d1cba-083e-4d74-a4f2-808da75f9fbe", 00:07:30.253 "assigned_rate_limits": { 00:07:30.253 "rw_ios_per_sec": 0, 00:07:30.253 "rw_mbytes_per_sec": 0, 00:07:30.253 "r_mbytes_per_sec": 0, 00:07:30.253 "w_mbytes_per_sec": 0 00:07:30.253 }, 00:07:30.253 "claimed": false, 00:07:30.253 "zoned": false, 00:07:30.253 "supported_io_types": { 00:07:30.253 "read": true, 00:07:30.253 "write": true, 00:07:30.253 "unmap": true, 00:07:30.253 "flush": true, 00:07:30.253 "reset": true, 00:07:30.253 "nvme_admin": false, 00:07:30.253 "nvme_io": false, 00:07:30.254 "nvme_io_md": false, 00:07:30.254 "write_zeroes": true, 00:07:30.254 "zcopy": false, 00:07:30.254 "get_zone_info": false, 00:07:30.254 "zone_management": false, 00:07:30.254 "zone_append": false, 00:07:30.254 "compare": false, 00:07:30.254 "compare_and_write": false, 00:07:30.254 "abort": false, 00:07:30.254 "seek_hole": false, 00:07:30.254 "seek_data": false, 00:07:30.254 "copy": false, 00:07:30.254 "nvme_iov_md": false 00:07:30.254 }, 00:07:30.254 "memory_domains": [ 00:07:30.254 { 00:07:30.254 "dma_device_id": "system", 00:07:30.254 "dma_device_type": 1 00:07:30.254 }, 00:07:30.254 { 00:07:30.254 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:30.254 "dma_device_type": 2 00:07:30.254 }, 00:07:30.254 { 00:07:30.254 "dma_device_id": "system", 00:07:30.254 "dma_device_type": 1 00:07:30.254 }, 00:07:30.254 { 00:07:30.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.254 "dma_device_type": 2 00:07:30.254 } 00:07:30.254 ], 00:07:30.254 "driver_specific": { 00:07:30.254 "raid": { 00:07:30.254 "uuid": "185d1cba-083e-4d74-a4f2-808da75f9fbe", 00:07:30.254 "strip_size_kb": 64, 00:07:30.254 "state": "online", 00:07:30.254 "raid_level": "raid0", 00:07:30.254 "superblock": false, 00:07:30.254 "num_base_bdevs": 2, 00:07:30.254 "num_base_bdevs_discovered": 2, 00:07:30.254 "num_base_bdevs_operational": 2, 00:07:30.254 "base_bdevs_list": [ 00:07:30.254 { 00:07:30.254 "name": "BaseBdev1", 00:07:30.254 "uuid": "369585a4-5944-4cb5-917d-16fbbcec41d4", 00:07:30.254 "is_configured": true, 00:07:30.254 "data_offset": 0, 00:07:30.254 "data_size": 65536 00:07:30.254 }, 00:07:30.254 { 00:07:30.254 "name": "BaseBdev2", 00:07:30.254 "uuid": "a600d587-286b-45bc-b815-69ef4152505c", 00:07:30.254 "is_configured": true, 00:07:30.254 "data_offset": 0, 00:07:30.254 "data_size": 65536 00:07:30.254 } 00:07:30.254 ] 00:07:30.254 } 00:07:30.254 } 00:07:30.254 }' 00:07:30.254 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:30.254 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:30.254 BaseBdev2' 00:07:30.254 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:30.514 [2024-11-18 03:07:33.966418] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:30.514 [2024-11-18 03:07:33.966453] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:30.514 [2024-11-18 03:07:33.966521] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.514 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:30.515 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:30.515 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:30.515 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:30.515 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:30.515 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:30.515 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.515 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:30.515 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:30.515 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.515 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:30.515 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.515 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.515 03:07:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.515 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.515 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.515 03:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.515 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.515 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.515 03:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.515 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.515 "name": "Existed_Raid", 00:07:30.515 "uuid": "185d1cba-083e-4d74-a4f2-808da75f9fbe", 00:07:30.515 "strip_size_kb": 64, 00:07:30.515 "state": "offline", 00:07:30.515 "raid_level": "raid0", 00:07:30.515 "superblock": false, 00:07:30.515 "num_base_bdevs": 2, 00:07:30.515 "num_base_bdevs_discovered": 1, 00:07:30.515 "num_base_bdevs_operational": 1, 00:07:30.515 "base_bdevs_list": [ 00:07:30.515 { 00:07:30.515 "name": null, 00:07:30.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.515 "is_configured": false, 00:07:30.515 "data_offset": 0, 00:07:30.515 "data_size": 65536 00:07:30.515 }, 00:07:30.515 { 00:07:30.515 "name": "BaseBdev2", 00:07:30.515 "uuid": "a600d587-286b-45bc-b815-69ef4152505c", 00:07:30.515 "is_configured": true, 00:07:30.515 "data_offset": 0, 00:07:30.515 "data_size": 65536 00:07:30.515 } 00:07:30.515 ] 00:07:30.515 }' 00:07:30.515 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.515 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.774 03:07:34 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:30.774 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:30.774 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:30.774 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.775 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.775 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.034 [2024-11-18 03:07:34.377085] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:31.034 [2024-11-18 03:07:34.377144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:31.034 03:07:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72255 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72255 ']' 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 72255 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72255 00:07:31.034 killing process with pid 72255 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72255' 00:07:31.034 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72255 00:07:31.035 [2024-11-18 03:07:34.478529] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:07:31.035 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72255 00:07:31.035 [2024-11-18 03:07:34.479556] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:31.295 ************************************ 00:07:31.295 END TEST raid_state_function_test 00:07:31.295 ************************************ 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:31.295 00:07:31.295 real 0m3.720s 00:07:31.295 user 0m5.786s 00:07:31.295 sys 0m0.763s 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.295 03:07:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:31.295 03:07:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:31.295 03:07:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.295 03:07:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.295 ************************************ 00:07:31.295 START TEST raid_state_function_test_sb 00:07:31.295 ************************************ 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72491 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72491' 00:07:31.295 Process raid pid: 72491 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72491 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72491 ']' 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.295 03:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.555 [2024-11-18 03:07:34.878936] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:31.555 [2024-11-18 03:07:34.879089] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.555 [2024-11-18 03:07:35.041696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.555 [2024-11-18 03:07:35.091813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.814 [2024-11-18 03:07:35.133778] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.814 [2024-11-18 03:07:35.133819] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.383 03:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.383 03:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:32.383 03:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.384 [2024-11-18 03:07:35.715053] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:32.384 [2024-11-18 03:07:35.715121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:32.384 [2024-11-18 03:07:35.715134] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.384 [2024-11-18 03:07:35.715144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.384 
03:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.384 "name": "Existed_Raid", 00:07:32.384 "uuid": "0830d015-b9e2-4657-a5c6-13a4ca6193a6", 00:07:32.384 "strip_size_kb": 
64, 00:07:32.384 "state": "configuring", 00:07:32.384 "raid_level": "raid0", 00:07:32.384 "superblock": true, 00:07:32.384 "num_base_bdevs": 2, 00:07:32.384 "num_base_bdevs_discovered": 0, 00:07:32.384 "num_base_bdevs_operational": 2, 00:07:32.384 "base_bdevs_list": [ 00:07:32.384 { 00:07:32.384 "name": "BaseBdev1", 00:07:32.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.384 "is_configured": false, 00:07:32.384 "data_offset": 0, 00:07:32.384 "data_size": 0 00:07:32.384 }, 00:07:32.384 { 00:07:32.384 "name": "BaseBdev2", 00:07:32.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.384 "is_configured": false, 00:07:32.384 "data_offset": 0, 00:07:32.384 "data_size": 0 00:07:32.384 } 00:07:32.384 ] 00:07:32.384 }' 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.384 03:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.644 [2024-11-18 03:07:36.162186] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:32.644 [2024-11-18 03:07:36.162238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.644 03:07:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.644 [2024-11-18 03:07:36.174207] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:32.644 [2024-11-18 03:07:36.174254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:32.644 [2024-11-18 03:07:36.174263] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.644 [2024-11-18 03:07:36.174272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.644 [2024-11-18 03:07:36.195207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.644 BaseBdev1 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.644 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.904 [ 00:07:32.904 { 00:07:32.904 "name": "BaseBdev1", 00:07:32.904 "aliases": [ 00:07:32.904 "398c6ed8-ab06-4194-be85-5ee717ca297a" 00:07:32.904 ], 00:07:32.904 "product_name": "Malloc disk", 00:07:32.904 "block_size": 512, 00:07:32.904 "num_blocks": 65536, 00:07:32.904 "uuid": "398c6ed8-ab06-4194-be85-5ee717ca297a", 00:07:32.904 "assigned_rate_limits": { 00:07:32.904 "rw_ios_per_sec": 0, 00:07:32.904 "rw_mbytes_per_sec": 0, 00:07:32.904 "r_mbytes_per_sec": 0, 00:07:32.904 "w_mbytes_per_sec": 0 00:07:32.904 }, 00:07:32.904 "claimed": true, 00:07:32.904 "claim_type": "exclusive_write", 00:07:32.904 "zoned": false, 00:07:32.904 "supported_io_types": { 00:07:32.904 "read": true, 00:07:32.904 "write": true, 00:07:32.904 "unmap": true, 00:07:32.904 "flush": true, 00:07:32.904 "reset": true, 00:07:32.904 "nvme_admin": false, 00:07:32.904 "nvme_io": false, 00:07:32.904 "nvme_io_md": false, 00:07:32.904 "write_zeroes": true, 00:07:32.904 "zcopy": true, 00:07:32.904 "get_zone_info": false, 00:07:32.904 "zone_management": false, 00:07:32.904 "zone_append": false, 00:07:32.904 "compare": false, 00:07:32.904 "compare_and_write": false, 00:07:32.904 
"abort": true, 00:07:32.904 "seek_hole": false, 00:07:32.904 "seek_data": false, 00:07:32.904 "copy": true, 00:07:32.904 "nvme_iov_md": false 00:07:32.904 }, 00:07:32.904 "memory_domains": [ 00:07:32.904 { 00:07:32.904 "dma_device_id": "system", 00:07:32.904 "dma_device_type": 1 00:07:32.904 }, 00:07:32.904 { 00:07:32.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.904 "dma_device_type": 2 00:07:32.904 } 00:07:32.904 ], 00:07:32.904 "driver_specific": {} 00:07:32.904 } 00:07:32.904 ] 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.904 "name": "Existed_Raid", 00:07:32.904 "uuid": "c28ebc82-bed4-46ae-9c24-3f9d789a355f", 00:07:32.904 "strip_size_kb": 64, 00:07:32.904 "state": "configuring", 00:07:32.904 "raid_level": "raid0", 00:07:32.904 "superblock": true, 00:07:32.904 "num_base_bdevs": 2, 00:07:32.904 "num_base_bdevs_discovered": 1, 00:07:32.904 "num_base_bdevs_operational": 2, 00:07:32.904 "base_bdevs_list": [ 00:07:32.904 { 00:07:32.904 "name": "BaseBdev1", 00:07:32.904 "uuid": "398c6ed8-ab06-4194-be85-5ee717ca297a", 00:07:32.904 "is_configured": true, 00:07:32.904 "data_offset": 2048, 00:07:32.904 "data_size": 63488 00:07:32.904 }, 00:07:32.904 { 00:07:32.904 "name": "BaseBdev2", 00:07:32.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.904 "is_configured": false, 00:07:32.904 "data_offset": 0, 00:07:32.904 "data_size": 0 00:07:32.904 } 00:07:32.904 ] 00:07:32.904 }' 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.904 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.165 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:33.165 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.165 03:07:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:33.165 [2024-11-18 03:07:36.606609] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:33.165 [2024-11-18 03:07:36.606672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:33.165 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.165 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:33.165 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.165 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.165 [2024-11-18 03:07:36.614613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.165 [2024-11-18 03:07:36.616632] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:33.165 [2024-11-18 03:07:36.616681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:33.165 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.165 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:33.165 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.165 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:33.165 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.165 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.165 03:07:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.166 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.166 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.166 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.166 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.166 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.166 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.166 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.166 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.166 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.166 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.166 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.166 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.166 "name": "Existed_Raid", 00:07:33.166 "uuid": "a41c9104-dcab-4f00-8863-512bcc8a72c9", 00:07:33.166 "strip_size_kb": 64, 00:07:33.166 "state": "configuring", 00:07:33.166 "raid_level": "raid0", 00:07:33.166 "superblock": true, 00:07:33.166 "num_base_bdevs": 2, 00:07:33.166 "num_base_bdevs_discovered": 1, 00:07:33.166 "num_base_bdevs_operational": 2, 00:07:33.166 "base_bdevs_list": [ 00:07:33.166 { 00:07:33.166 "name": "BaseBdev1", 00:07:33.166 "uuid": "398c6ed8-ab06-4194-be85-5ee717ca297a", 00:07:33.166 "is_configured": true, 00:07:33.166 "data_offset": 2048, 
00:07:33.166 "data_size": 63488 00:07:33.166 }, 00:07:33.166 { 00:07:33.166 "name": "BaseBdev2", 00:07:33.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.166 "is_configured": false, 00:07:33.166 "data_offset": 0, 00:07:33.166 "data_size": 0 00:07:33.166 } 00:07:33.166 ] 00:07:33.166 }' 00:07:33.166 03:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.166 03:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.735 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:33.735 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.735 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.735 [2024-11-18 03:07:37.025818] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.735 [2024-11-18 03:07:37.026111] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:33.735 [2024-11-18 03:07:37.026140] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.735 [2024-11-18 03:07:37.026523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:33.735 BaseBdev2 00:07:33.735 [2024-11-18 03:07:37.026717] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:33.735 [2024-11-18 03:07:37.026738] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:33.735 [2024-11-18 03:07:37.026877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.735 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.735 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:33.735 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:33.735 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:33.735 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:33.735 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:33.735 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:33.735 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:33.735 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.735 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.735 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.735 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:33.735 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.735 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.735 [ 00:07:33.735 { 00:07:33.735 "name": "BaseBdev2", 00:07:33.735 "aliases": [ 00:07:33.735 "8bee8a10-e600-4978-9ea8-687348c81f62" 00:07:33.735 ], 00:07:33.735 "product_name": "Malloc disk", 00:07:33.735 "block_size": 512, 00:07:33.735 "num_blocks": 65536, 00:07:33.735 "uuid": "8bee8a10-e600-4978-9ea8-687348c81f62", 00:07:33.735 "assigned_rate_limits": { 00:07:33.735 "rw_ios_per_sec": 0, 00:07:33.735 "rw_mbytes_per_sec": 0, 00:07:33.735 "r_mbytes_per_sec": 0, 00:07:33.735 "w_mbytes_per_sec": 0 00:07:33.735 }, 00:07:33.735 "claimed": true, 00:07:33.735 "claim_type": 
"exclusive_write", 00:07:33.735 "zoned": false, 00:07:33.735 "supported_io_types": { 00:07:33.735 "read": true, 00:07:33.735 "write": true, 00:07:33.735 "unmap": true, 00:07:33.735 "flush": true, 00:07:33.735 "reset": true, 00:07:33.735 "nvme_admin": false, 00:07:33.735 "nvme_io": false, 00:07:33.735 "nvme_io_md": false, 00:07:33.735 "write_zeroes": true, 00:07:33.735 "zcopy": true, 00:07:33.735 "get_zone_info": false, 00:07:33.735 "zone_management": false, 00:07:33.735 "zone_append": false, 00:07:33.735 "compare": false, 00:07:33.735 "compare_and_write": false, 00:07:33.735 "abort": true, 00:07:33.735 "seek_hole": false, 00:07:33.735 "seek_data": false, 00:07:33.735 "copy": true, 00:07:33.735 "nvme_iov_md": false 00:07:33.735 }, 00:07:33.736 "memory_domains": [ 00:07:33.736 { 00:07:33.736 "dma_device_id": "system", 00:07:33.736 "dma_device_type": 1 00:07:33.736 }, 00:07:33.736 { 00:07:33.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.736 "dma_device_type": 2 00:07:33.736 } 00:07:33.736 ], 00:07:33.736 "driver_specific": {} 00:07:33.736 } 00:07:33.736 ] 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.736 "name": "Existed_Raid", 00:07:33.736 "uuid": "a41c9104-dcab-4f00-8863-512bcc8a72c9", 00:07:33.736 "strip_size_kb": 64, 00:07:33.736 "state": "online", 00:07:33.736 "raid_level": "raid0", 00:07:33.736 "superblock": true, 00:07:33.736 "num_base_bdevs": 2, 00:07:33.736 "num_base_bdevs_discovered": 2, 00:07:33.736 "num_base_bdevs_operational": 2, 00:07:33.736 "base_bdevs_list": [ 00:07:33.736 { 00:07:33.736 "name": "BaseBdev1", 00:07:33.736 "uuid": "398c6ed8-ab06-4194-be85-5ee717ca297a", 00:07:33.736 "is_configured": true, 00:07:33.736 "data_offset": 2048, 00:07:33.736 "data_size": 63488 
00:07:33.736 }, 00:07:33.736 { 00:07:33.736 "name": "BaseBdev2", 00:07:33.736 "uuid": "8bee8a10-e600-4978-9ea8-687348c81f62", 00:07:33.736 "is_configured": true, 00:07:33.736 "data_offset": 2048, 00:07:33.736 "data_size": 63488 00:07:33.736 } 00:07:33.736 ] 00:07:33.736 }' 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.736 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.996 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:33.996 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:33.996 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:33.996 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:33.996 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:33.996 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:33.996 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:33.996 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.996 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.996 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:33.996 [2024-11-18 03:07:37.517365] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.996 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.996 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:33.996 "name": 
"Existed_Raid", 00:07:33.996 "aliases": [ 00:07:33.996 "a41c9104-dcab-4f00-8863-512bcc8a72c9" 00:07:33.996 ], 00:07:33.996 "product_name": "Raid Volume", 00:07:33.996 "block_size": 512, 00:07:33.996 "num_blocks": 126976, 00:07:33.996 "uuid": "a41c9104-dcab-4f00-8863-512bcc8a72c9", 00:07:33.996 "assigned_rate_limits": { 00:07:33.996 "rw_ios_per_sec": 0, 00:07:33.996 "rw_mbytes_per_sec": 0, 00:07:33.996 "r_mbytes_per_sec": 0, 00:07:33.996 "w_mbytes_per_sec": 0 00:07:33.996 }, 00:07:33.996 "claimed": false, 00:07:33.996 "zoned": false, 00:07:33.996 "supported_io_types": { 00:07:33.996 "read": true, 00:07:33.996 "write": true, 00:07:33.996 "unmap": true, 00:07:33.996 "flush": true, 00:07:33.996 "reset": true, 00:07:33.996 "nvme_admin": false, 00:07:33.996 "nvme_io": false, 00:07:33.996 "nvme_io_md": false, 00:07:33.996 "write_zeroes": true, 00:07:33.996 "zcopy": false, 00:07:33.996 "get_zone_info": false, 00:07:33.996 "zone_management": false, 00:07:33.996 "zone_append": false, 00:07:33.996 "compare": false, 00:07:33.996 "compare_and_write": false, 00:07:33.996 "abort": false, 00:07:33.996 "seek_hole": false, 00:07:33.996 "seek_data": false, 00:07:33.996 "copy": false, 00:07:33.996 "nvme_iov_md": false 00:07:33.996 }, 00:07:33.996 "memory_domains": [ 00:07:33.996 { 00:07:33.996 "dma_device_id": "system", 00:07:33.996 "dma_device_type": 1 00:07:33.996 }, 00:07:33.996 { 00:07:33.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.996 "dma_device_type": 2 00:07:33.996 }, 00:07:33.996 { 00:07:33.996 "dma_device_id": "system", 00:07:33.996 "dma_device_type": 1 00:07:33.996 }, 00:07:33.996 { 00:07:33.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.996 "dma_device_type": 2 00:07:33.996 } 00:07:33.996 ], 00:07:33.996 "driver_specific": { 00:07:33.996 "raid": { 00:07:33.996 "uuid": "a41c9104-dcab-4f00-8863-512bcc8a72c9", 00:07:33.996 "strip_size_kb": 64, 00:07:33.996 "state": "online", 00:07:33.996 "raid_level": "raid0", 00:07:33.996 "superblock": true, 00:07:33.996 
"num_base_bdevs": 2, 00:07:33.996 "num_base_bdevs_discovered": 2, 00:07:33.996 "num_base_bdevs_operational": 2, 00:07:33.996 "base_bdevs_list": [ 00:07:33.996 { 00:07:33.996 "name": "BaseBdev1", 00:07:33.996 "uuid": "398c6ed8-ab06-4194-be85-5ee717ca297a", 00:07:33.996 "is_configured": true, 00:07:33.996 "data_offset": 2048, 00:07:33.996 "data_size": 63488 00:07:33.996 }, 00:07:33.996 { 00:07:33.996 "name": "BaseBdev2", 00:07:33.996 "uuid": "8bee8a10-e600-4978-9ea8-687348c81f62", 00:07:33.996 "is_configured": true, 00:07:33.996 "data_offset": 2048, 00:07:33.996 "data_size": 63488 00:07:33.996 } 00:07:33.996 ] 00:07:33.996 } 00:07:33.996 } 00:07:33.996 }' 00:07:33.996 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:34.257 BaseBdev2' 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.257 [2024-11-18 03:07:37.776641] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:34.257 [2024-11-18 03:07:37.776679] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:34.257 [2024-11-18 03:07:37.776737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.257 03:07:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.257 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.517 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.517 "name": "Existed_Raid", 00:07:34.517 "uuid": "a41c9104-dcab-4f00-8863-512bcc8a72c9", 00:07:34.517 "strip_size_kb": 64, 00:07:34.517 "state": "offline", 00:07:34.517 "raid_level": "raid0", 00:07:34.517 "superblock": true, 00:07:34.517 "num_base_bdevs": 2, 00:07:34.517 "num_base_bdevs_discovered": 1, 00:07:34.517 "num_base_bdevs_operational": 1, 00:07:34.517 "base_bdevs_list": [ 00:07:34.517 { 00:07:34.517 "name": null, 00:07:34.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.517 "is_configured": false, 00:07:34.517 "data_offset": 0, 00:07:34.517 "data_size": 63488 00:07:34.517 }, 00:07:34.517 { 00:07:34.517 "name": "BaseBdev2", 00:07:34.517 "uuid": "8bee8a10-e600-4978-9ea8-687348c81f62", 00:07:34.517 "is_configured": true, 00:07:34.517 "data_offset": 2048, 00:07:34.517 "data_size": 63488 00:07:34.517 } 00:07:34.517 ] 00:07:34.517 }' 00:07:34.517 03:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.517 03:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:34.777 03:07:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.777 [2024-11-18 03:07:38.231343] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:34.777 [2024-11-18 03:07:38.231413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.777 03:07:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72491 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72491 ']' 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72491 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72491 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.777 killing process with pid 72491 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72491' 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72491 00:07:34.777 [2024-11-18 03:07:38.336118] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.777 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72491 00:07:34.777 [2024-11-18 03:07:38.337181] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.036 03:07:38 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:07:35.036 00:07:35.036 real 0m3.792s 00:07:35.036 user 0m5.923s 00:07:35.036 sys 0m0.767s 00:07:35.036 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.036 03:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.036 ************************************ 00:07:35.036 END TEST raid_state_function_test_sb 00:07:35.036 ************************************ 00:07:35.295 03:07:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:35.295 03:07:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:35.295 03:07:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.295 03:07:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.295 ************************************ 00:07:35.295 START TEST raid_superblock_test 00:07:35.295 ************************************ 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:35.295 03:07:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72731 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72731 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72731 ']' 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.295 03:07:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.295 [2024-11-18 03:07:38.729429] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:35.295 [2024-11-18 03:07:38.729654] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72731 ] 00:07:35.554 [2024-11-18 03:07:38.889683] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.554 [2024-11-18 03:07:38.940034] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.554 [2024-11-18 03:07:38.982134] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.554 [2024-11-18 03:07:38.982251] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.172 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.172 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:36.172 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:36.172 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:36.172 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:36.172 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:36.172 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:36.172 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:36.172 03:07:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:36.172 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:36.172 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:36.172 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.172 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.172 malloc1 00:07:36.172 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.173 [2024-11-18 03:07:39.592343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:36.173 [2024-11-18 03:07:39.592449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.173 [2024-11-18 03:07:39.592472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:36.173 [2024-11-18 03:07:39.592495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.173 [2024-11-18 03:07:39.594652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.173 [2024-11-18 03:07:39.594768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:36.173 pt1 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:36.173 03:07:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.173 malloc2 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.173 [2024-11-18 03:07:39.634597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:36.173 [2024-11-18 03:07:39.634709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.173 [2024-11-18 03:07:39.634747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:36.173 
[2024-11-18 03:07:39.634762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.173 [2024-11-18 03:07:39.637096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.173 [2024-11-18 03:07:39.637137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:36.173 pt2 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.173 [2024-11-18 03:07:39.646620] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:36.173 [2024-11-18 03:07:39.648564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:36.173 [2024-11-18 03:07:39.648745] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:36.173 [2024-11-18 03:07:39.648796] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:36.173 [2024-11-18 03:07:39.649107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:36.173 [2024-11-18 03:07:39.649281] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:36.173 [2024-11-18 03:07:39.649328] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:36.173 [2024-11-18 03:07:39.649520] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.173 "name": "raid_bdev1", 00:07:36.173 "uuid": 
"b36296e4-074d-4d69-b064-ba1aea3297ee", 00:07:36.173 "strip_size_kb": 64, 00:07:36.173 "state": "online", 00:07:36.173 "raid_level": "raid0", 00:07:36.173 "superblock": true, 00:07:36.173 "num_base_bdevs": 2, 00:07:36.173 "num_base_bdevs_discovered": 2, 00:07:36.173 "num_base_bdevs_operational": 2, 00:07:36.173 "base_bdevs_list": [ 00:07:36.173 { 00:07:36.173 "name": "pt1", 00:07:36.173 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:36.173 "is_configured": true, 00:07:36.173 "data_offset": 2048, 00:07:36.173 "data_size": 63488 00:07:36.173 }, 00:07:36.173 { 00:07:36.173 "name": "pt2", 00:07:36.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:36.173 "is_configured": true, 00:07:36.173 "data_offset": 2048, 00:07:36.173 "data_size": 63488 00:07:36.173 } 00:07:36.173 ] 00:07:36.173 }' 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.173 03:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.744 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:36.744 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:36.744 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:36.744 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:36.744 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:36.744 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:36.744 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:36.744 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.744 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:36.744 03:07:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.744 [2024-11-18 03:07:40.054242] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:36.744 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.744 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:36.744 "name": "raid_bdev1", 00:07:36.744 "aliases": [ 00:07:36.744 "b36296e4-074d-4d69-b064-ba1aea3297ee" 00:07:36.744 ], 00:07:36.744 "product_name": "Raid Volume", 00:07:36.744 "block_size": 512, 00:07:36.744 "num_blocks": 126976, 00:07:36.744 "uuid": "b36296e4-074d-4d69-b064-ba1aea3297ee", 00:07:36.744 "assigned_rate_limits": { 00:07:36.744 "rw_ios_per_sec": 0, 00:07:36.744 "rw_mbytes_per_sec": 0, 00:07:36.744 "r_mbytes_per_sec": 0, 00:07:36.744 "w_mbytes_per_sec": 0 00:07:36.744 }, 00:07:36.744 "claimed": false, 00:07:36.744 "zoned": false, 00:07:36.744 "supported_io_types": { 00:07:36.744 "read": true, 00:07:36.744 "write": true, 00:07:36.744 "unmap": true, 00:07:36.744 "flush": true, 00:07:36.744 "reset": true, 00:07:36.744 "nvme_admin": false, 00:07:36.744 "nvme_io": false, 00:07:36.744 "nvme_io_md": false, 00:07:36.744 "write_zeroes": true, 00:07:36.744 "zcopy": false, 00:07:36.744 "get_zone_info": false, 00:07:36.744 "zone_management": false, 00:07:36.744 "zone_append": false, 00:07:36.744 "compare": false, 00:07:36.744 "compare_and_write": false, 00:07:36.744 "abort": false, 00:07:36.744 "seek_hole": false, 00:07:36.744 "seek_data": false, 00:07:36.744 "copy": false, 00:07:36.744 "nvme_iov_md": false 00:07:36.744 }, 00:07:36.744 "memory_domains": [ 00:07:36.744 { 00:07:36.744 "dma_device_id": "system", 00:07:36.744 "dma_device_type": 1 00:07:36.744 }, 00:07:36.744 { 00:07:36.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.744 "dma_device_type": 2 00:07:36.744 }, 00:07:36.744 { 00:07:36.744 "dma_device_id": "system", 00:07:36.744 "dma_device_type": 
1 00:07:36.744 }, 00:07:36.744 { 00:07:36.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.744 "dma_device_type": 2 00:07:36.744 } 00:07:36.744 ], 00:07:36.744 "driver_specific": { 00:07:36.744 "raid": { 00:07:36.744 "uuid": "b36296e4-074d-4d69-b064-ba1aea3297ee", 00:07:36.744 "strip_size_kb": 64, 00:07:36.744 "state": "online", 00:07:36.744 "raid_level": "raid0", 00:07:36.744 "superblock": true, 00:07:36.744 "num_base_bdevs": 2, 00:07:36.744 "num_base_bdevs_discovered": 2, 00:07:36.744 "num_base_bdevs_operational": 2, 00:07:36.744 "base_bdevs_list": [ 00:07:36.744 { 00:07:36.744 "name": "pt1", 00:07:36.744 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:36.744 "is_configured": true, 00:07:36.744 "data_offset": 2048, 00:07:36.744 "data_size": 63488 00:07:36.744 }, 00:07:36.744 { 00:07:36.744 "name": "pt2", 00:07:36.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:36.745 "is_configured": true, 00:07:36.745 "data_offset": 2048, 00:07:36.745 "data_size": 63488 00:07:36.745 } 00:07:36.745 ] 00:07:36.745 } 00:07:36.745 } 00:07:36.745 }' 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:36.745 pt2' 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.745 03:07:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.745 [2024-11-18 03:07:40.269803] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b36296e4-074d-4d69-b064-ba1aea3297ee 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b36296e4-074d-4d69-b064-ba1aea3297ee ']' 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.745 [2024-11-18 03:07:40.305458] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:36.745 [2024-11-18 03:07:40.305496] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:36.745 [2024-11-18 03:07:40.305575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:36.745 [2024-11-18 03:07:40.305624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:36.745 [2024-11-18 03:07:40.305652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.745 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.005 [2024-11-18 03:07:40.425294] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:37.005 [2024-11-18 03:07:40.427380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:37.005 [2024-11-18 03:07:40.427470] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:37.005 [2024-11-18 03:07:40.427522] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:37.005 [2024-11-18 03:07:40.427541] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.005 [2024-11-18 03:07:40.427552] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:37.005 request: 00:07:37.005 { 00:07:37.005 "name": "raid_bdev1", 00:07:37.005 "raid_level": "raid0", 00:07:37.005 "base_bdevs": [ 00:07:37.005 "malloc1", 00:07:37.005 "malloc2" 00:07:37.005 ], 00:07:37.005 "strip_size_kb": 64, 00:07:37.005 "superblock": false, 00:07:37.005 "method": "bdev_raid_create", 00:07:37.005 "req_id": 1 00:07:37.005 } 00:07:37.005 Got JSON-RPC error response 00:07:37.005 response: 00:07:37.005 { 00:07:37.005 "code": -17, 00:07:37.005 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:37.005 } 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:37.005 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.006 [2024-11-18 03:07:40.489137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:37.006 [2024-11-18 03:07:40.489194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.006 [2024-11-18 03:07:40.489213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:37.006 [2024-11-18 03:07:40.489222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.006 [2024-11-18 03:07:40.491517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.006 [2024-11-18 03:07:40.491552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:37.006 [2024-11-18 03:07:40.491630] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:37.006 [2024-11-18 03:07:40.491671] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:37.006 pt1 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.006 "name": "raid_bdev1", 00:07:37.006 "uuid": "b36296e4-074d-4d69-b064-ba1aea3297ee", 00:07:37.006 "strip_size_kb": 64, 00:07:37.006 "state": "configuring", 00:07:37.006 "raid_level": "raid0", 00:07:37.006 "superblock": true, 00:07:37.006 "num_base_bdevs": 2, 00:07:37.006 "num_base_bdevs_discovered": 1, 00:07:37.006 "num_base_bdevs_operational": 2, 00:07:37.006 "base_bdevs_list": [ 00:07:37.006 { 00:07:37.006 "name": "pt1", 00:07:37.006 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:37.006 "is_configured": true, 00:07:37.006 "data_offset": 2048, 00:07:37.006 "data_size": 63488 00:07:37.006 }, 00:07:37.006 { 00:07:37.006 "name": null, 00:07:37.006 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.006 "is_configured": false, 00:07:37.006 "data_offset": 2048, 00:07:37.006 "data_size": 63488 00:07:37.006 } 00:07:37.006 ] 00:07:37.006 }' 00:07:37.006 03:07:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.006 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.575 [2024-11-18 03:07:40.948397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:37.575 [2024-11-18 03:07:40.948470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.575 [2024-11-18 03:07:40.948495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:37.575 [2024-11-18 03:07:40.948505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.575 [2024-11-18 03:07:40.948922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.575 [2024-11-18 03:07:40.948940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:37.575 [2024-11-18 03:07:40.949032] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:37.575 [2024-11-18 03:07:40.949054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:37.575 [2024-11-18 03:07:40.949147] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:37.575 [2024-11-18 03:07:40.949157] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:37.575 [2024-11-18 03:07:40.949378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:37.575 [2024-11-18 03:07:40.949482] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:37.575 [2024-11-18 03:07:40.949506] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:37.575 [2024-11-18 03:07:40.949606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.575 pt2 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.575 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.575 "name": "raid_bdev1", 00:07:37.575 "uuid": "b36296e4-074d-4d69-b064-ba1aea3297ee", 00:07:37.575 "strip_size_kb": 64, 00:07:37.575 "state": "online", 00:07:37.575 "raid_level": "raid0", 00:07:37.575 "superblock": true, 00:07:37.575 "num_base_bdevs": 2, 00:07:37.575 "num_base_bdevs_discovered": 2, 00:07:37.575 "num_base_bdevs_operational": 2, 00:07:37.575 "base_bdevs_list": [ 00:07:37.575 { 00:07:37.575 "name": "pt1", 00:07:37.575 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:37.575 "is_configured": true, 00:07:37.575 "data_offset": 2048, 00:07:37.575 "data_size": 63488 00:07:37.576 }, 00:07:37.576 { 00:07:37.576 "name": "pt2", 00:07:37.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.576 "is_configured": true, 00:07:37.576 "data_offset": 2048, 00:07:37.576 "data_size": 63488 00:07:37.576 } 00:07:37.576 ] 00:07:37.576 }' 00:07:37.576 03:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.576 03:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.836 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:37.836 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:37.836 
03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:37.836 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:37.836 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:37.836 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:37.836 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:37.836 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:37.836 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.836 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.836 [2024-11-18 03:07:41.391986] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:38.096 "name": "raid_bdev1", 00:07:38.096 "aliases": [ 00:07:38.096 "b36296e4-074d-4d69-b064-ba1aea3297ee" 00:07:38.096 ], 00:07:38.096 "product_name": "Raid Volume", 00:07:38.096 "block_size": 512, 00:07:38.096 "num_blocks": 126976, 00:07:38.096 "uuid": "b36296e4-074d-4d69-b064-ba1aea3297ee", 00:07:38.096 "assigned_rate_limits": { 00:07:38.096 "rw_ios_per_sec": 0, 00:07:38.096 "rw_mbytes_per_sec": 0, 00:07:38.096 "r_mbytes_per_sec": 0, 00:07:38.096 "w_mbytes_per_sec": 0 00:07:38.096 }, 00:07:38.096 "claimed": false, 00:07:38.096 "zoned": false, 00:07:38.096 "supported_io_types": { 00:07:38.096 "read": true, 00:07:38.096 "write": true, 00:07:38.096 "unmap": true, 00:07:38.096 "flush": true, 00:07:38.096 "reset": true, 00:07:38.096 "nvme_admin": false, 00:07:38.096 "nvme_io": false, 00:07:38.096 "nvme_io_md": false, 00:07:38.096 
"write_zeroes": true, 00:07:38.096 "zcopy": false, 00:07:38.096 "get_zone_info": false, 00:07:38.096 "zone_management": false, 00:07:38.096 "zone_append": false, 00:07:38.096 "compare": false, 00:07:38.096 "compare_and_write": false, 00:07:38.096 "abort": false, 00:07:38.096 "seek_hole": false, 00:07:38.096 "seek_data": false, 00:07:38.096 "copy": false, 00:07:38.096 "nvme_iov_md": false 00:07:38.096 }, 00:07:38.096 "memory_domains": [ 00:07:38.096 { 00:07:38.096 "dma_device_id": "system", 00:07:38.096 "dma_device_type": 1 00:07:38.096 }, 00:07:38.096 { 00:07:38.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.096 "dma_device_type": 2 00:07:38.096 }, 00:07:38.096 { 00:07:38.096 "dma_device_id": "system", 00:07:38.096 "dma_device_type": 1 00:07:38.096 }, 00:07:38.096 { 00:07:38.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.096 "dma_device_type": 2 00:07:38.096 } 00:07:38.096 ], 00:07:38.096 "driver_specific": { 00:07:38.096 "raid": { 00:07:38.096 "uuid": "b36296e4-074d-4d69-b064-ba1aea3297ee", 00:07:38.096 "strip_size_kb": 64, 00:07:38.096 "state": "online", 00:07:38.096 "raid_level": "raid0", 00:07:38.096 "superblock": true, 00:07:38.096 "num_base_bdevs": 2, 00:07:38.096 "num_base_bdevs_discovered": 2, 00:07:38.096 "num_base_bdevs_operational": 2, 00:07:38.096 "base_bdevs_list": [ 00:07:38.096 { 00:07:38.096 "name": "pt1", 00:07:38.096 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:38.096 "is_configured": true, 00:07:38.096 "data_offset": 2048, 00:07:38.096 "data_size": 63488 00:07:38.096 }, 00:07:38.096 { 00:07:38.096 "name": "pt2", 00:07:38.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:38.096 "is_configured": true, 00:07:38.096 "data_offset": 2048, 00:07:38.096 "data_size": 63488 00:07:38.096 } 00:07:38.096 ] 00:07:38.096 } 00:07:38.096 } 00:07:38.096 }' 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:38.096 pt2' 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.096 03:07:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.096 [2024-11-18 03:07:41.595659] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b36296e4-074d-4d69-b064-ba1aea3297ee '!=' b36296e4-074d-4d69-b064-ba1aea3297ee ']' 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72731 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72731 ']' 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72731 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72731 00:07:38.096 killing process with pid 72731 
00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72731' 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72731 00:07:38.096 [2024-11-18 03:07:41.660674] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.096 [2024-11-18 03:07:41.660759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.096 [2024-11-18 03:07:41.660810] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:38.096 [2024-11-18 03:07:41.660820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:38.096 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72731 00:07:38.356 [2024-11-18 03:07:41.683901] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.356 03:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:38.356 00:07:38.356 real 0m3.271s 00:07:38.356 user 0m5.062s 00:07:38.356 sys 0m0.662s 00:07:38.356 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.356 03:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.356 ************************************ 00:07:38.356 END TEST raid_superblock_test 00:07:38.356 ************************************ 00:07:38.617 03:07:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:38.617 03:07:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:38.617 03:07:41 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.617 03:07:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.617 ************************************ 00:07:38.617 START TEST raid_read_error_test 00:07:38.617 ************************************ 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:38.617 03:07:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:38.617 03:07:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:38.617 03:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cel4gljc4u 00:07:38.617 03:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72927 00:07:38.617 03:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:38.617 03:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72927 00:07:38.617 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72927 ']' 00:07:38.617 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.617 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.617 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:38.617 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.617 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.617 [2024-11-18 03:07:42.085033] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:38.617 [2024-11-18 03:07:42.085188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72927 ] 00:07:38.877 [2024-11-18 03:07:42.247671] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.877 [2024-11-18 03:07:42.298003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.877 [2024-11-18 03:07:42.339941] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.877 [2024-11-18 03:07:42.339985] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.447 BaseBdev1_malloc 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.447 true 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.447 [2024-11-18 03:07:42.958112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:39.447 [2024-11-18 03:07:42.958168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.447 [2024-11-18 03:07:42.958206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:39.447 [2024-11-18 03:07:42.958215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.447 [2024-11-18 03:07:42.960401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.447 [2024-11-18 03:07:42.960442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:39.447 BaseBdev1 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:39.447 BaseBdev2_malloc 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.447 true 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.447 03:07:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.447 [2024-11-18 03:07:43.006489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:39.447 [2024-11-18 03:07:43.006549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.447 [2024-11-18 03:07:43.006585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:39.447 [2024-11-18 03:07:43.006593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.447 [2024-11-18 03:07:43.008708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.447 [2024-11-18 03:07:43.008750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:39.447 BaseBdev2 00:07:39.447 03:07:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.447 03:07:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:39.447 03:07:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.447 03:07:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.447 [2024-11-18 03:07:43.018516] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:39.447 [2024-11-18 03:07:43.020408] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.447 [2024-11-18 03:07:43.020602] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:39.447 [2024-11-18 03:07:43.020623] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:39.447 [2024-11-18 03:07:43.020914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:39.447 [2024-11-18 03:07:43.021078] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:39.447 [2024-11-18 03:07:43.021099] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:39.447 [2024-11-18 03:07:43.021248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.707 03:07:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.707 03:07:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:39.707 03:07:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.707 03:07:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.707 03:07:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.707 03:07:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.707 03:07:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:39.707 03:07:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.707 03:07:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.707 03:07:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.707 03:07:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.707 03:07:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.707 03:07:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.707 03:07:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.707 03:07:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.707 03:07:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.707 03:07:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.707 "name": "raid_bdev1", 00:07:39.707 "uuid": "4988856b-9188-4ca6-b73f-eef6c168475d", 00:07:39.707 "strip_size_kb": 64, 00:07:39.707 "state": "online", 00:07:39.707 "raid_level": "raid0", 00:07:39.708 "superblock": true, 00:07:39.708 "num_base_bdevs": 2, 00:07:39.708 "num_base_bdevs_discovered": 2, 00:07:39.708 "num_base_bdevs_operational": 2, 00:07:39.708 "base_bdevs_list": [ 00:07:39.708 { 00:07:39.708 "name": "BaseBdev1", 00:07:39.708 "uuid": "aaf5bfd1-bbb2-5f41-8507-ff2311b1dda2", 00:07:39.708 "is_configured": true, 00:07:39.708 "data_offset": 2048, 00:07:39.708 "data_size": 63488 00:07:39.708 }, 00:07:39.708 { 00:07:39.708 "name": "BaseBdev2", 00:07:39.708 "uuid": "03840c33-e498-5417-9c02-0b2f5c8469c8", 00:07:39.708 "is_configured": true, 00:07:39.708 "data_offset": 2048, 00:07:39.708 "data_size": 63488 00:07:39.708 } 00:07:39.708 ] 00:07:39.708 }' 00:07:39.708 03:07:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.708 03:07:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.967 03:07:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:39.968 03:07:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:40.227 [2024-11-18 03:07:43.557990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.165 "name": "raid_bdev1", 00:07:41.165 "uuid": "4988856b-9188-4ca6-b73f-eef6c168475d", 00:07:41.165 "strip_size_kb": 64, 00:07:41.165 "state": "online", 00:07:41.165 "raid_level": "raid0", 00:07:41.165 "superblock": true, 00:07:41.165 "num_base_bdevs": 2, 00:07:41.165 "num_base_bdevs_discovered": 2, 00:07:41.165 "num_base_bdevs_operational": 2, 00:07:41.165 "base_bdevs_list": [ 00:07:41.165 { 00:07:41.165 "name": "BaseBdev1", 00:07:41.165 "uuid": "aaf5bfd1-bbb2-5f41-8507-ff2311b1dda2", 00:07:41.165 "is_configured": true, 00:07:41.165 "data_offset": 2048, 00:07:41.165 "data_size": 63488 00:07:41.165 }, 00:07:41.165 { 00:07:41.165 "name": "BaseBdev2", 00:07:41.165 "uuid": "03840c33-e498-5417-9c02-0b2f5c8469c8", 00:07:41.165 "is_configured": true, 00:07:41.165 "data_offset": 2048, 00:07:41.165 "data_size": 63488 00:07:41.165 } 00:07:41.165 ] 00:07:41.165 }' 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.165 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.424 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:41.424 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.424 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.424 [2024-11-18 03:07:44.874039] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.424 [2024-11-18 03:07:44.874075] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.424 [2024-11-18 03:07:44.876851] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.424 [2024-11-18 03:07:44.876936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.424 [2024-11-18 03:07:44.877005] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.424 [2024-11-18 03:07:44.877017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:41.424 { 00:07:41.424 "results": [ 00:07:41.424 { 00:07:41.424 "job": "raid_bdev1", 00:07:41.424 "core_mask": "0x1", 00:07:41.424 "workload": "randrw", 00:07:41.424 "percentage": 50, 00:07:41.424 "status": "finished", 00:07:41.424 "queue_depth": 1, 00:07:41.424 "io_size": 131072, 00:07:41.424 "runtime": 1.316644, 00:07:41.424 "iops": 16065.08669009998, 00:07:41.424 "mibps": 2008.1358362624976, 00:07:41.424 "io_failed": 1, 00:07:41.424 "io_timeout": 0, 00:07:41.424 "avg_latency_us": 86.12702504130336, 00:07:41.424 "min_latency_us": 27.72401746724891, 00:07:41.424 "max_latency_us": 1659.8637554585152 00:07:41.424 } 00:07:41.424 ], 00:07:41.424 "core_count": 1 00:07:41.424 } 00:07:41.424 03:07:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.424 03:07:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72927 00:07:41.424 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72927 ']' 00:07:41.424 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72927 00:07:41.424 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:41.424 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.424 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72927 00:07:41.424 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:41.424 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:41.424 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72927' 00:07:41.424 killing process with pid 72927 00:07:41.424 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72927 00:07:41.424 [2024-11-18 03:07:44.921539] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.424 03:07:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72927 00:07:41.424 [2024-11-18 03:07:44.937373] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:41.685 03:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cel4gljc4u 00:07:41.685 03:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:41.685 03:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:41.685 03:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:07:41.685 03:07:45 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:41.685 ************************************ 00:07:41.685 END TEST raid_read_error_test 00:07:41.685 ************************************ 00:07:41.685 03:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:41.685 03:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:41.685 03:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:07:41.685 00:07:41.685 real 0m3.195s 00:07:41.685 user 0m4.065s 00:07:41.685 sys 0m0.483s 00:07:41.685 03:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.685 03:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.685 03:07:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:41.685 03:07:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:41.685 03:07:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.685 03:07:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:41.685 ************************************ 00:07:41.685 START TEST raid_write_error_test 00:07:41.685 ************************************ 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:41.685 03:07:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VhWhZoHU8S 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73062 00:07:41.685 03:07:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73062 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73062 ']' 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.685 03:07:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:41.944 [2024-11-18 03:07:45.333449] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:41.944 [2024-11-18 03:07:45.333596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73062 ] 00:07:41.944 [2024-11-18 03:07:45.494624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.204 [2024-11-18 03:07:45.545438] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.204 [2024-11-18 03:07:45.587977] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.204 [2024-11-18 03:07:45.588017] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.773 BaseBdev1_malloc 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.773 true 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.773 [2024-11-18 03:07:46.210317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:42.773 [2024-11-18 03:07:46.210379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.773 [2024-11-18 03:07:46.210399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:42.773 [2024-11-18 03:07:46.210414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.773 [2024-11-18 03:07:46.212604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.773 [2024-11-18 03:07:46.212645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:42.773 BaseBdev1 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.773 BaseBdev2_malloc 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:42.773 03:07:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.773 true 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.773 [2024-11-18 03:07:46.251100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:42.773 [2024-11-18 03:07:46.251158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.773 [2024-11-18 03:07:46.251178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:42.773 [2024-11-18 03:07:46.251187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.773 [2024-11-18 03:07:46.253437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.773 [2024-11-18 03:07:46.253476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:42.773 BaseBdev2 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.773 [2024-11-18 03:07:46.259159] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:42.773 [2024-11-18 03:07:46.261251] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.773 [2024-11-18 03:07:46.261426] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:42.773 [2024-11-18 03:07:46.261439] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:42.773 [2024-11-18 03:07:46.261720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:42.773 [2024-11-18 03:07:46.261857] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:42.773 [2024-11-18 03:07:46.261870] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:42.773 [2024-11-18 03:07:46.262057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.773 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.773 "name": "raid_bdev1", 00:07:42.773 "uuid": "49f87c09-2499-43ee-a95c-8969ca3928e1", 00:07:42.773 "strip_size_kb": 64, 00:07:42.773 "state": "online", 00:07:42.773 "raid_level": "raid0", 00:07:42.773 "superblock": true, 00:07:42.773 "num_base_bdevs": 2, 00:07:42.773 "num_base_bdevs_discovered": 2, 00:07:42.773 "num_base_bdevs_operational": 2, 00:07:42.773 "base_bdevs_list": [ 00:07:42.773 { 00:07:42.773 "name": "BaseBdev1", 00:07:42.773 "uuid": "56b51c83-d7c7-5c9b-870d-8ae7b1a70886", 00:07:42.773 "is_configured": true, 00:07:42.774 "data_offset": 2048, 00:07:42.774 "data_size": 63488 00:07:42.774 }, 00:07:42.774 { 00:07:42.774 "name": "BaseBdev2", 00:07:42.774 "uuid": "376bb30d-c61d-5a17-a637-9d9102e4cb6f", 00:07:42.774 "is_configured": true, 00:07:42.774 "data_offset": 2048, 00:07:42.774 "data_size": 63488 00:07:42.774 } 00:07:42.774 ] 00:07:42.774 }' 00:07:42.774 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.774 03:07:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.342 03:07:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:43.342 03:07:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:43.342 [2024-11-18 03:07:46.810563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.306 03:07:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.306 "name": "raid_bdev1", 00:07:44.306 "uuid": "49f87c09-2499-43ee-a95c-8969ca3928e1", 00:07:44.306 "strip_size_kb": 64, 00:07:44.306 "state": "online", 00:07:44.306 "raid_level": "raid0", 00:07:44.306 "superblock": true, 00:07:44.306 "num_base_bdevs": 2, 00:07:44.306 "num_base_bdevs_discovered": 2, 00:07:44.306 "num_base_bdevs_operational": 2, 00:07:44.306 "base_bdevs_list": [ 00:07:44.306 { 00:07:44.306 "name": "BaseBdev1", 00:07:44.306 "uuid": "56b51c83-d7c7-5c9b-870d-8ae7b1a70886", 00:07:44.306 "is_configured": true, 00:07:44.306 "data_offset": 2048, 00:07:44.306 "data_size": 63488 00:07:44.306 }, 00:07:44.306 { 00:07:44.306 "name": "BaseBdev2", 00:07:44.306 "uuid": "376bb30d-c61d-5a17-a637-9d9102e4cb6f", 00:07:44.306 "is_configured": true, 00:07:44.306 "data_offset": 2048, 00:07:44.306 "data_size": 63488 00:07:44.306 } 00:07:44.306 ] 00:07:44.306 }' 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.306 03:07:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.873 03:07:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:44.873 03:07:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.873 03:07:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.873 [2024-11-18 03:07:48.174644] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:44.873 [2024-11-18 03:07:48.174737] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.873 [2024-11-18 03:07:48.177461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.873 [2024-11-18 03:07:48.177544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.873 [2024-11-18 03:07:48.177596] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.873 [2024-11-18 03:07:48.177639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:44.873 { 00:07:44.873 "results": [ 00:07:44.873 { 00:07:44.873 "job": "raid_bdev1", 00:07:44.873 "core_mask": "0x1", 00:07:44.873 "workload": "randrw", 00:07:44.873 "percentage": 50, 00:07:44.873 "status": "finished", 00:07:44.873 "queue_depth": 1, 00:07:44.873 "io_size": 131072, 00:07:44.873 "runtime": 1.364876, 00:07:44.873 "iops": 16775.882937351085, 00:07:44.873 "mibps": 2096.9853671688857, 00:07:44.873 "io_failed": 1, 00:07:44.873 "io_timeout": 0, 00:07:44.873 "avg_latency_us": 82.61858547932906, 00:07:44.873 "min_latency_us": 26.270742358078603, 00:07:44.873 "max_latency_us": 1466.6899563318777 00:07:44.873 } 00:07:44.873 ], 00:07:44.873 "core_count": 1 00:07:44.873 } 00:07:44.873 03:07:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.873 03:07:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73062 00:07:44.873 03:07:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 73062 ']' 00:07:44.873 03:07:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73062 00:07:44.873 03:07:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:44.873 03:07:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.873 03:07:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73062 00:07:44.873 killing process with pid 73062 00:07:44.873 03:07:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.873 03:07:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.873 03:07:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73062' 00:07:44.873 03:07:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73062 00:07:44.873 [2024-11-18 03:07:48.214329] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:44.873 03:07:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73062 00:07:44.873 [2024-11-18 03:07:48.230590] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.131 03:07:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:45.131 03:07:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VhWhZoHU8S 00:07:45.131 03:07:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:45.131 ************************************ 00:07:45.131 END TEST raid_write_error_test 00:07:45.131 ************************************ 00:07:45.131 03:07:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:45.131 03:07:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:45.131 
03:07:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.131 03:07:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:45.131 03:07:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:45.131 00:07:45.131 real 0m3.236s 00:07:45.131 user 0m4.141s 00:07:45.131 sys 0m0.492s 00:07:45.131 03:07:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.131 03:07:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.131 03:07:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:45.131 03:07:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:45.131 03:07:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:45.131 03:07:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.131 03:07:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.131 ************************************ 00:07:45.131 START TEST raid_state_function_test 00:07:45.131 ************************************ 00:07:45.131 03:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:45.132 Process raid pid: 73189 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73189 
00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73189' 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73189 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73189 ']' 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.132 03:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.132 [2024-11-18 03:07:48.639305] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:45.132 [2024-11-18 03:07:48.639524] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.390 [2024-11-18 03:07:48.801674] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.390 [2024-11-18 03:07:48.852021] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.390 [2024-11-18 03:07:48.894733] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.390 [2024-11-18 03:07:48.894767] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.955 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.955 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:45.955 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:45.955 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.956 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.956 [2024-11-18 03:07:49.492398] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:45.956 [2024-11-18 03:07:49.492459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:45.956 [2024-11-18 03:07:49.492472] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.956 [2024-11-18 03:07:49.492482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.956 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.956 03:07:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:45.956 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.956 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.956 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:45.956 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.956 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.956 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.956 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.956 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.956 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.956 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.956 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.956 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.956 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.956 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.213 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.213 "name": "Existed_Raid", 00:07:46.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.213 "strip_size_kb": 64, 00:07:46.213 "state": "configuring", 00:07:46.213 
"raid_level": "concat", 00:07:46.213 "superblock": false, 00:07:46.213 "num_base_bdevs": 2, 00:07:46.213 "num_base_bdevs_discovered": 0, 00:07:46.213 "num_base_bdevs_operational": 2, 00:07:46.213 "base_bdevs_list": [ 00:07:46.213 { 00:07:46.213 "name": "BaseBdev1", 00:07:46.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.213 "is_configured": false, 00:07:46.213 "data_offset": 0, 00:07:46.213 "data_size": 0 00:07:46.213 }, 00:07:46.213 { 00:07:46.213 "name": "BaseBdev2", 00:07:46.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.213 "is_configured": false, 00:07:46.213 "data_offset": 0, 00:07:46.213 "data_size": 0 00:07:46.213 } 00:07:46.213 ] 00:07:46.213 }' 00:07:46.213 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.213 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.471 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:46.471 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.471 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.471 [2024-11-18 03:07:49.911591] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:46.471 [2024-11-18 03:07:49.911724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:46.471 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.471 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.471 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.471 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:46.471 [2024-11-18 03:07:49.923651] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:46.471 [2024-11-18 03:07:49.923746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:46.471 [2024-11-18 03:07:49.923774] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.471 [2024-11-18 03:07:49.923797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.471 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.471 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:46.471 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.471 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.472 [2024-11-18 03:07:49.944599] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.472 BaseBdev1 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.472 [ 00:07:46.472 { 00:07:46.472 "name": "BaseBdev1", 00:07:46.472 "aliases": [ 00:07:46.472 "941ebe0b-3c4a-4af6-ab8c-2947dac7ac3c" 00:07:46.472 ], 00:07:46.472 "product_name": "Malloc disk", 00:07:46.472 "block_size": 512, 00:07:46.472 "num_blocks": 65536, 00:07:46.472 "uuid": "941ebe0b-3c4a-4af6-ab8c-2947dac7ac3c", 00:07:46.472 "assigned_rate_limits": { 00:07:46.472 "rw_ios_per_sec": 0, 00:07:46.472 "rw_mbytes_per_sec": 0, 00:07:46.472 "r_mbytes_per_sec": 0, 00:07:46.472 "w_mbytes_per_sec": 0 00:07:46.472 }, 00:07:46.472 "claimed": true, 00:07:46.472 "claim_type": "exclusive_write", 00:07:46.472 "zoned": false, 00:07:46.472 "supported_io_types": { 00:07:46.472 "read": true, 00:07:46.472 "write": true, 00:07:46.472 "unmap": true, 00:07:46.472 "flush": true, 00:07:46.472 "reset": true, 00:07:46.472 "nvme_admin": false, 00:07:46.472 "nvme_io": false, 00:07:46.472 "nvme_io_md": false, 00:07:46.472 "write_zeroes": true, 00:07:46.472 "zcopy": true, 00:07:46.472 "get_zone_info": false, 00:07:46.472 "zone_management": false, 00:07:46.472 "zone_append": false, 00:07:46.472 "compare": false, 00:07:46.472 "compare_and_write": false, 00:07:46.472 "abort": true, 00:07:46.472 "seek_hole": false, 00:07:46.472 "seek_data": false, 00:07:46.472 "copy": true, 00:07:46.472 "nvme_iov_md": 
false 00:07:46.472 }, 00:07:46.472 "memory_domains": [ 00:07:46.472 { 00:07:46.472 "dma_device_id": "system", 00:07:46.472 "dma_device_type": 1 00:07:46.472 }, 00:07:46.472 { 00:07:46.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.472 "dma_device_type": 2 00:07:46.472 } 00:07:46.472 ], 00:07:46.472 "driver_specific": {} 00:07:46.472 } 00:07:46.472 ] 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.472 03:07:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.472 03:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.472 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.472 "name": "Existed_Raid", 00:07:46.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.472 "strip_size_kb": 64, 00:07:46.472 "state": "configuring", 00:07:46.472 "raid_level": "concat", 00:07:46.472 "superblock": false, 00:07:46.472 "num_base_bdevs": 2, 00:07:46.472 "num_base_bdevs_discovered": 1, 00:07:46.472 "num_base_bdevs_operational": 2, 00:07:46.472 "base_bdevs_list": [ 00:07:46.472 { 00:07:46.472 "name": "BaseBdev1", 00:07:46.472 "uuid": "941ebe0b-3c4a-4af6-ab8c-2947dac7ac3c", 00:07:46.472 "is_configured": true, 00:07:46.472 "data_offset": 0, 00:07:46.472 "data_size": 65536 00:07:46.472 }, 00:07:46.472 { 00:07:46.472 "name": "BaseBdev2", 00:07:46.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.472 "is_configured": false, 00:07:46.472 "data_offset": 0, 00:07:46.472 "data_size": 0 00:07:46.472 } 00:07:46.472 ] 00:07:46.472 }' 00:07:46.472 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.472 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.039 [2024-11-18 03:07:50.355989] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:47.039 [2024-11-18 03:07:50.356110] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.039 [2024-11-18 03:07:50.364025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.039 [2024-11-18 03:07:50.366179] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:47.039 [2024-11-18 03:07:50.366300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.039 "name": "Existed_Raid", 00:07:47.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.039 "strip_size_kb": 64, 00:07:47.039 "state": "configuring", 00:07:47.039 "raid_level": "concat", 00:07:47.039 "superblock": false, 00:07:47.039 "num_base_bdevs": 2, 00:07:47.039 "num_base_bdevs_discovered": 1, 00:07:47.039 "num_base_bdevs_operational": 2, 00:07:47.039 "base_bdevs_list": [ 00:07:47.039 { 00:07:47.039 "name": "BaseBdev1", 00:07:47.039 "uuid": "941ebe0b-3c4a-4af6-ab8c-2947dac7ac3c", 00:07:47.039 "is_configured": true, 00:07:47.039 "data_offset": 0, 00:07:47.039 "data_size": 65536 00:07:47.039 }, 00:07:47.039 { 00:07:47.039 "name": "BaseBdev2", 00:07:47.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.039 "is_configured": false, 00:07:47.039 "data_offset": 0, 00:07:47.039 "data_size": 0 
00:07:47.039 } 00:07:47.039 ] 00:07:47.039 }' 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.039 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.297 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:47.297 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.297 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.297 [2024-11-18 03:07:50.806448] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.297 [2024-11-18 03:07:50.806500] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:47.297 [2024-11-18 03:07:50.806509] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:47.297 [2024-11-18 03:07:50.806789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:47.297 [2024-11-18 03:07:50.806936] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:47.297 [2024-11-18 03:07:50.806952] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:47.297 [2024-11-18 03:07:50.807194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.297 BaseBdev2 00:07:47.297 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.297 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:47.297 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:47.297 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:47.297 03:07:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:47.297 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:47.297 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:47.297 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:47.297 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.297 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.297 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.297 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:47.297 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.297 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.297 [ 00:07:47.297 { 00:07:47.297 "name": "BaseBdev2", 00:07:47.297 "aliases": [ 00:07:47.297 "44539b14-d79c-41dd-95c1-f573a963def4" 00:07:47.297 ], 00:07:47.297 "product_name": "Malloc disk", 00:07:47.297 "block_size": 512, 00:07:47.297 "num_blocks": 65536, 00:07:47.297 "uuid": "44539b14-d79c-41dd-95c1-f573a963def4", 00:07:47.297 "assigned_rate_limits": { 00:07:47.297 "rw_ios_per_sec": 0, 00:07:47.297 "rw_mbytes_per_sec": 0, 00:07:47.297 "r_mbytes_per_sec": 0, 00:07:47.297 "w_mbytes_per_sec": 0 00:07:47.297 }, 00:07:47.297 "claimed": true, 00:07:47.297 "claim_type": "exclusive_write", 00:07:47.297 "zoned": false, 00:07:47.298 "supported_io_types": { 00:07:47.298 "read": true, 00:07:47.298 "write": true, 00:07:47.298 "unmap": true, 00:07:47.298 "flush": true, 00:07:47.298 "reset": true, 00:07:47.298 "nvme_admin": false, 00:07:47.298 "nvme_io": false, 00:07:47.298 "nvme_io_md": 
false, 00:07:47.298 "write_zeroes": true, 00:07:47.298 "zcopy": true, 00:07:47.298 "get_zone_info": false, 00:07:47.298 "zone_management": false, 00:07:47.298 "zone_append": false, 00:07:47.298 "compare": false, 00:07:47.298 "compare_and_write": false, 00:07:47.298 "abort": true, 00:07:47.298 "seek_hole": false, 00:07:47.298 "seek_data": false, 00:07:47.298 "copy": true, 00:07:47.298 "nvme_iov_md": false 00:07:47.298 }, 00:07:47.298 "memory_domains": [ 00:07:47.298 { 00:07:47.298 "dma_device_id": "system", 00:07:47.298 "dma_device_type": 1 00:07:47.298 }, 00:07:47.298 { 00:07:47.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.298 "dma_device_type": 2 00:07:47.298 } 00:07:47.298 ], 00:07:47.298 "driver_specific": {} 00:07:47.298 } 00:07:47.298 ] 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.298 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.556 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.556 "name": "Existed_Raid", 00:07:47.556 "uuid": "10150efa-613f-4a55-83d4-dcf19e81ae64", 00:07:47.556 "strip_size_kb": 64, 00:07:47.556 "state": "online", 00:07:47.556 "raid_level": "concat", 00:07:47.556 "superblock": false, 00:07:47.556 "num_base_bdevs": 2, 00:07:47.556 "num_base_bdevs_discovered": 2, 00:07:47.556 "num_base_bdevs_operational": 2, 00:07:47.556 "base_bdevs_list": [ 00:07:47.556 { 00:07:47.556 "name": "BaseBdev1", 00:07:47.556 "uuid": "941ebe0b-3c4a-4af6-ab8c-2947dac7ac3c", 00:07:47.556 "is_configured": true, 00:07:47.556 "data_offset": 0, 00:07:47.556 "data_size": 65536 00:07:47.556 }, 00:07:47.556 { 00:07:47.556 "name": "BaseBdev2", 00:07:47.556 "uuid": "44539b14-d79c-41dd-95c1-f573a963def4", 00:07:47.556 "is_configured": true, 00:07:47.556 "data_offset": 0, 00:07:47.556 "data_size": 65536 00:07:47.556 } 00:07:47.556 ] 00:07:47.556 }' 00:07:47.556 03:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:47.556 03:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.815 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:47.815 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:47.815 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:47.815 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:47.815 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:47.815 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:47.815 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:47.815 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:47.815 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.815 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.815 [2024-11-18 03:07:51.270060] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.815 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.815 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:47.815 "name": "Existed_Raid", 00:07:47.815 "aliases": [ 00:07:47.815 "10150efa-613f-4a55-83d4-dcf19e81ae64" 00:07:47.815 ], 00:07:47.815 "product_name": "Raid Volume", 00:07:47.815 "block_size": 512, 00:07:47.815 "num_blocks": 131072, 00:07:47.815 "uuid": "10150efa-613f-4a55-83d4-dcf19e81ae64", 00:07:47.815 "assigned_rate_limits": { 00:07:47.815 "rw_ios_per_sec": 0, 00:07:47.815 "rw_mbytes_per_sec": 0, 00:07:47.815 "r_mbytes_per_sec": 
0, 00:07:47.815 "w_mbytes_per_sec": 0 00:07:47.815 }, 00:07:47.815 "claimed": false, 00:07:47.815 "zoned": false, 00:07:47.815 "supported_io_types": { 00:07:47.815 "read": true, 00:07:47.815 "write": true, 00:07:47.815 "unmap": true, 00:07:47.815 "flush": true, 00:07:47.815 "reset": true, 00:07:47.815 "nvme_admin": false, 00:07:47.815 "nvme_io": false, 00:07:47.815 "nvme_io_md": false, 00:07:47.815 "write_zeroes": true, 00:07:47.815 "zcopy": false, 00:07:47.815 "get_zone_info": false, 00:07:47.815 "zone_management": false, 00:07:47.815 "zone_append": false, 00:07:47.815 "compare": false, 00:07:47.815 "compare_and_write": false, 00:07:47.815 "abort": false, 00:07:47.815 "seek_hole": false, 00:07:47.815 "seek_data": false, 00:07:47.815 "copy": false, 00:07:47.815 "nvme_iov_md": false 00:07:47.815 }, 00:07:47.815 "memory_domains": [ 00:07:47.815 { 00:07:47.815 "dma_device_id": "system", 00:07:47.815 "dma_device_type": 1 00:07:47.815 }, 00:07:47.815 { 00:07:47.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.815 "dma_device_type": 2 00:07:47.815 }, 00:07:47.815 { 00:07:47.815 "dma_device_id": "system", 00:07:47.815 "dma_device_type": 1 00:07:47.815 }, 00:07:47.815 { 00:07:47.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.815 "dma_device_type": 2 00:07:47.815 } 00:07:47.815 ], 00:07:47.815 "driver_specific": { 00:07:47.815 "raid": { 00:07:47.815 "uuid": "10150efa-613f-4a55-83d4-dcf19e81ae64", 00:07:47.815 "strip_size_kb": 64, 00:07:47.815 "state": "online", 00:07:47.815 "raid_level": "concat", 00:07:47.815 "superblock": false, 00:07:47.815 "num_base_bdevs": 2, 00:07:47.815 "num_base_bdevs_discovered": 2, 00:07:47.815 "num_base_bdevs_operational": 2, 00:07:47.815 "base_bdevs_list": [ 00:07:47.815 { 00:07:47.815 "name": "BaseBdev1", 00:07:47.815 "uuid": "941ebe0b-3c4a-4af6-ab8c-2947dac7ac3c", 00:07:47.815 "is_configured": true, 00:07:47.815 "data_offset": 0, 00:07:47.815 "data_size": 65536 00:07:47.815 }, 00:07:47.815 { 00:07:47.815 "name": "BaseBdev2", 
00:07:47.815 "uuid": "44539b14-d79c-41dd-95c1-f573a963def4", 00:07:47.815 "is_configured": true, 00:07:47.815 "data_offset": 0, 00:07:47.815 "data_size": 65536 00:07:47.815 } 00:07:47.815 ] 00:07:47.815 } 00:07:47.815 } 00:07:47.815 }' 00:07:47.815 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:47.815 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:47.815 BaseBdev2' 00:07:47.815 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.074 [2024-11-18 03:07:51.505383] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:48.074 [2024-11-18 03:07:51.505474] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:48.074 [2024-11-18 03:07:51.505566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.074 "name": "Existed_Raid", 00:07:48.074 "uuid": "10150efa-613f-4a55-83d4-dcf19e81ae64", 00:07:48.074 "strip_size_kb": 64, 00:07:48.074 
"state": "offline", 00:07:48.074 "raid_level": "concat", 00:07:48.074 "superblock": false, 00:07:48.074 "num_base_bdevs": 2, 00:07:48.074 "num_base_bdevs_discovered": 1, 00:07:48.074 "num_base_bdevs_operational": 1, 00:07:48.074 "base_bdevs_list": [ 00:07:48.074 { 00:07:48.074 "name": null, 00:07:48.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.074 "is_configured": false, 00:07:48.074 "data_offset": 0, 00:07:48.074 "data_size": 65536 00:07:48.074 }, 00:07:48.074 { 00:07:48.074 "name": "BaseBdev2", 00:07:48.074 "uuid": "44539b14-d79c-41dd-95c1-f573a963def4", 00:07:48.074 "is_configured": true, 00:07:48.074 "data_offset": 0, 00:07:48.074 "data_size": 65536 00:07:48.074 } 00:07:48.074 ] 00:07:48.074 }' 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.074 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.641 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:48.641 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.641 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:48.641 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.641 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.641 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.641 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.641 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:48.641 03:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:48.641 03:07:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:48.641 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.641 03:07:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.641 [2024-11-18 03:07:52.004400] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:48.641 [2024-11-18 03:07:52.004467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:48.641 03:07:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.641 03:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:48.641 03:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.641 03:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.641 03:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:48.641 03:07:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.641 03:07:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.641 03:07:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.641 03:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:48.641 03:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:48.642 03:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:48.642 03:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73189 00:07:48.642 03:07:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73189 ']' 00:07:48.642 03:07:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 73189 00:07:48.642 03:07:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:48.642 03:07:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.642 03:07:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73189 00:07:48.642 killing process with pid 73189 00:07:48.642 03:07:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.642 03:07:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.642 03:07:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73189' 00:07:48.642 03:07:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73189 00:07:48.642 [2024-11-18 03:07:52.114304] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.642 03:07:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73189 00:07:48.642 [2024-11-18 03:07:52.115357] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.900 03:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:48.900 00:07:48.900 real 0m3.815s 00:07:48.900 user 0m5.963s 00:07:48.900 sys 0m0.778s 00:07:48.900 03:07:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.900 03:07:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.900 ************************************ 00:07:48.900 END TEST raid_state_function_test 00:07:48.900 ************************************ 00:07:48.900 03:07:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:48.900 03:07:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:07:48.900 03:07:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.900 03:07:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.900 ************************************ 00:07:48.900 START TEST raid_state_function_test_sb 00:07:48.900 ************************************ 00:07:48.900 03:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:07:48.900 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:48.900 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:48.900 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:48.900 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:48.900 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:48.900 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.900 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:48.901 Process raid pid: 73430 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73430 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73430' 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73430 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73430 ']' 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.901 03:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.159 [2024-11-18 03:07:52.522705] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:49.159 [2024-11-18 03:07:52.522838] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.159 [2024-11-18 03:07:52.683879] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.159 [2024-11-18 03:07:52.734032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.418 [2024-11-18 03:07:52.776819] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.418 [2024-11-18 03:07:52.776861] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.985 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.985 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:49.985 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:49.985 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.985 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:07:49.985 [2024-11-18 03:07:53.366920] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:49.985 [2024-11-18 03:07:53.366993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:49.985 [2024-11-18 03:07:53.367013] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.985 [2024-11-18 03:07:53.367024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.985 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.985 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:49.985 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.985 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.985 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:49.985 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.985 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.986 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.986 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.986 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.986 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.986 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.986 03:07:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.986 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.986 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.986 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.986 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.986 "name": "Existed_Raid", 00:07:49.986 "uuid": "1687efde-9ff7-423e-abfb-4f8791021c43", 00:07:49.986 "strip_size_kb": 64, 00:07:49.986 "state": "configuring", 00:07:49.986 "raid_level": "concat", 00:07:49.986 "superblock": true, 00:07:49.986 "num_base_bdevs": 2, 00:07:49.986 "num_base_bdevs_discovered": 0, 00:07:49.986 "num_base_bdevs_operational": 2, 00:07:49.986 "base_bdevs_list": [ 00:07:49.986 { 00:07:49.986 "name": "BaseBdev1", 00:07:49.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.986 "is_configured": false, 00:07:49.986 "data_offset": 0, 00:07:49.986 "data_size": 0 00:07:49.986 }, 00:07:49.986 { 00:07:49.986 "name": "BaseBdev2", 00:07:49.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.986 "is_configured": false, 00:07:49.986 "data_offset": 0, 00:07:49.986 "data_size": 0 00:07:49.986 } 00:07:49.986 ] 00:07:49.986 }' 00:07:49.986 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.986 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.554 
[2024-11-18 03:07:53.830068] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:50.554 [2024-11-18 03:07:53.830183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.554 [2024-11-18 03:07:53.842088] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.554 [2024-11-18 03:07:53.842133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.554 [2024-11-18 03:07:53.842142] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.554 [2024-11-18 03:07:53.842151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.554 [2024-11-18 03:07:53.863077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.554 BaseBdev1 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.554 [ 00:07:50.554 { 00:07:50.554 "name": "BaseBdev1", 00:07:50.554 "aliases": [ 00:07:50.554 "655572e3-06eb-4496-ad5c-02d8f333787d" 00:07:50.554 ], 00:07:50.554 "product_name": "Malloc disk", 00:07:50.554 "block_size": 512, 00:07:50.554 "num_blocks": 65536, 00:07:50.554 "uuid": "655572e3-06eb-4496-ad5c-02d8f333787d", 00:07:50.554 "assigned_rate_limits": { 00:07:50.554 "rw_ios_per_sec": 0, 00:07:50.554 "rw_mbytes_per_sec": 0, 
00:07:50.554 "r_mbytes_per_sec": 0, 00:07:50.554 "w_mbytes_per_sec": 0 00:07:50.554 }, 00:07:50.554 "claimed": true, 00:07:50.554 "claim_type": "exclusive_write", 00:07:50.554 "zoned": false, 00:07:50.554 "supported_io_types": { 00:07:50.554 "read": true, 00:07:50.554 "write": true, 00:07:50.554 "unmap": true, 00:07:50.554 "flush": true, 00:07:50.554 "reset": true, 00:07:50.554 "nvme_admin": false, 00:07:50.554 "nvme_io": false, 00:07:50.554 "nvme_io_md": false, 00:07:50.554 "write_zeroes": true, 00:07:50.554 "zcopy": true, 00:07:50.554 "get_zone_info": false, 00:07:50.554 "zone_management": false, 00:07:50.554 "zone_append": false, 00:07:50.554 "compare": false, 00:07:50.554 "compare_and_write": false, 00:07:50.554 "abort": true, 00:07:50.554 "seek_hole": false, 00:07:50.554 "seek_data": false, 00:07:50.554 "copy": true, 00:07:50.554 "nvme_iov_md": false 00:07:50.554 }, 00:07:50.554 "memory_domains": [ 00:07:50.554 { 00:07:50.554 "dma_device_id": "system", 00:07:50.554 "dma_device_type": 1 00:07:50.554 }, 00:07:50.554 { 00:07:50.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.554 "dma_device_type": 2 00:07:50.554 } 00:07:50.554 ], 00:07:50.554 "driver_specific": {} 00:07:50.554 } 00:07:50.554 ] 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.554 03:07:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.554 "name": "Existed_Raid", 00:07:50.554 "uuid": "d0e55f33-f613-430e-9a54-1a5ddd3db283", 00:07:50.554 "strip_size_kb": 64, 00:07:50.554 "state": "configuring", 00:07:50.554 "raid_level": "concat", 00:07:50.554 "superblock": true, 00:07:50.554 "num_base_bdevs": 2, 00:07:50.554 "num_base_bdevs_discovered": 1, 00:07:50.554 "num_base_bdevs_operational": 2, 00:07:50.554 "base_bdevs_list": [ 00:07:50.554 { 00:07:50.554 "name": "BaseBdev1", 00:07:50.554 "uuid": "655572e3-06eb-4496-ad5c-02d8f333787d", 00:07:50.554 "is_configured": true, 00:07:50.554 "data_offset": 2048, 00:07:50.554 "data_size": 63488 00:07:50.554 }, 00:07:50.554 { 
00:07:50.554 "name": "BaseBdev2", 00:07:50.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.554 "is_configured": false, 00:07:50.554 "data_offset": 0, 00:07:50.554 "data_size": 0 00:07:50.554 } 00:07:50.554 ] 00:07:50.554 }' 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.554 03:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.813 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:50.813 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.813 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.813 [2024-11-18 03:07:54.346371] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:50.813 [2024-11-18 03:07:54.346507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:50.813 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.813 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.813 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.813 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.813 [2024-11-18 03:07:54.358377] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.813 [2024-11-18 03:07:54.360477] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.813 [2024-11-18 03:07:54.360562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.813 03:07:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.813 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:50.813 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:50.813 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:50.813 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.813 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.813 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.813 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.813 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.814 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.814 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.814 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.814 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.814 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.814 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.814 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.814 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.072 03:07:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.072 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.072 "name": "Existed_Raid", 00:07:51.072 "uuid": "91412ab8-ec36-459e-9af8-2aeb5e96fd0f", 00:07:51.072 "strip_size_kb": 64, 00:07:51.072 "state": "configuring", 00:07:51.072 "raid_level": "concat", 00:07:51.072 "superblock": true, 00:07:51.072 "num_base_bdevs": 2, 00:07:51.072 "num_base_bdevs_discovered": 1, 00:07:51.072 "num_base_bdevs_operational": 2, 00:07:51.072 "base_bdevs_list": [ 00:07:51.072 { 00:07:51.072 "name": "BaseBdev1", 00:07:51.072 "uuid": "655572e3-06eb-4496-ad5c-02d8f333787d", 00:07:51.072 "is_configured": true, 00:07:51.072 "data_offset": 2048, 00:07:51.072 "data_size": 63488 00:07:51.072 }, 00:07:51.072 { 00:07:51.072 "name": "BaseBdev2", 00:07:51.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.072 "is_configured": false, 00:07:51.072 "data_offset": 0, 00:07:51.072 "data_size": 0 00:07:51.072 } 00:07:51.072 ] 00:07:51.072 }' 00:07:51.072 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.072 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.331 [2024-11-18 03:07:54.800048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:51.331 [2024-11-18 03:07:54.800336] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:51.331 [2024-11-18 03:07:54.800394] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 
126976, blocklen 512 00:07:51.331 [2024-11-18 03:07:54.800728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:51.331 [2024-11-18 03:07:54.800953] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:51.331 [2024-11-18 03:07:54.801038] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:51.331 BaseBdev2 00:07:51.331 [2024-11-18 03:07:54.801270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:51.331 03:07:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.331 [ 00:07:51.331 { 00:07:51.331 "name": "BaseBdev2", 00:07:51.331 "aliases": [ 00:07:51.331 "7b9ae854-32fd-40ac-a2b6-40f1499b0274" 00:07:51.331 ], 00:07:51.331 "product_name": "Malloc disk", 00:07:51.331 "block_size": 512, 00:07:51.331 "num_blocks": 65536, 00:07:51.331 "uuid": "7b9ae854-32fd-40ac-a2b6-40f1499b0274", 00:07:51.331 "assigned_rate_limits": { 00:07:51.331 "rw_ios_per_sec": 0, 00:07:51.331 "rw_mbytes_per_sec": 0, 00:07:51.331 "r_mbytes_per_sec": 0, 00:07:51.331 "w_mbytes_per_sec": 0 00:07:51.331 }, 00:07:51.331 "claimed": true, 00:07:51.331 "claim_type": "exclusive_write", 00:07:51.331 "zoned": false, 00:07:51.331 "supported_io_types": { 00:07:51.331 "read": true, 00:07:51.331 "write": true, 00:07:51.331 "unmap": true, 00:07:51.331 "flush": true, 00:07:51.331 "reset": true, 00:07:51.331 "nvme_admin": false, 00:07:51.331 "nvme_io": false, 00:07:51.331 "nvme_io_md": false, 00:07:51.331 "write_zeroes": true, 00:07:51.331 "zcopy": true, 00:07:51.331 "get_zone_info": false, 00:07:51.331 "zone_management": false, 00:07:51.331 "zone_append": false, 00:07:51.331 "compare": false, 00:07:51.331 "compare_and_write": false, 00:07:51.331 "abort": true, 00:07:51.331 "seek_hole": false, 00:07:51.331 "seek_data": false, 00:07:51.331 "copy": true, 00:07:51.331 "nvme_iov_md": false 00:07:51.331 }, 00:07:51.331 "memory_domains": [ 00:07:51.331 { 00:07:51.331 "dma_device_id": "system", 00:07:51.331 "dma_device_type": 1 00:07:51.331 }, 00:07:51.331 { 00:07:51.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.331 "dma_device_type": 2 00:07:51.331 } 00:07:51.331 ], 00:07:51.331 "driver_specific": {} 00:07:51.331 } 00:07:51.331 ] 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.331 03:07:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.331 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.332 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:51.332 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.332 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.332 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.332 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.332 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.332 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.332 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.332 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.332 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.332 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.332 03:07:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.332 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.332 "name": "Existed_Raid", 00:07:51.332 "uuid": "91412ab8-ec36-459e-9af8-2aeb5e96fd0f", 00:07:51.332 "strip_size_kb": 64, 00:07:51.332 "state": "online", 00:07:51.332 "raid_level": "concat", 00:07:51.332 "superblock": true, 00:07:51.332 "num_base_bdevs": 2, 00:07:51.332 "num_base_bdevs_discovered": 2, 00:07:51.332 "num_base_bdevs_operational": 2, 00:07:51.332 "base_bdevs_list": [ 00:07:51.332 { 00:07:51.332 "name": "BaseBdev1", 00:07:51.332 "uuid": "655572e3-06eb-4496-ad5c-02d8f333787d", 00:07:51.332 "is_configured": true, 00:07:51.332 "data_offset": 2048, 00:07:51.332 "data_size": 63488 00:07:51.332 }, 00:07:51.332 { 00:07:51.332 "name": "BaseBdev2", 00:07:51.332 "uuid": "7b9ae854-32fd-40ac-a2b6-40f1499b0274", 00:07:51.332 "is_configured": true, 00:07:51.332 "data_offset": 2048, 00:07:51.332 "data_size": 63488 00:07:51.332 } 00:07:51.332 ] 00:07:51.332 }' 00:07:51.332 03:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.332 03:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.900 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:51.900 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:51.900 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:51.900 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:51.900 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:51.900 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 
00:07:51.900 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:51.900 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.900 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.900 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:51.900 [2024-11-18 03:07:55.307577] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.900 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.900 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:51.900 "name": "Existed_Raid", 00:07:51.900 "aliases": [ 00:07:51.900 "91412ab8-ec36-459e-9af8-2aeb5e96fd0f" 00:07:51.900 ], 00:07:51.900 "product_name": "Raid Volume", 00:07:51.900 "block_size": 512, 00:07:51.900 "num_blocks": 126976, 00:07:51.900 "uuid": "91412ab8-ec36-459e-9af8-2aeb5e96fd0f", 00:07:51.900 "assigned_rate_limits": { 00:07:51.900 "rw_ios_per_sec": 0, 00:07:51.900 "rw_mbytes_per_sec": 0, 00:07:51.900 "r_mbytes_per_sec": 0, 00:07:51.900 "w_mbytes_per_sec": 0 00:07:51.900 }, 00:07:51.900 "claimed": false, 00:07:51.900 "zoned": false, 00:07:51.900 "supported_io_types": { 00:07:51.900 "read": true, 00:07:51.900 "write": true, 00:07:51.900 "unmap": true, 00:07:51.900 "flush": true, 00:07:51.900 "reset": true, 00:07:51.900 "nvme_admin": false, 00:07:51.900 "nvme_io": false, 00:07:51.900 "nvme_io_md": false, 00:07:51.900 "write_zeroes": true, 00:07:51.900 "zcopy": false, 00:07:51.900 "get_zone_info": false, 00:07:51.900 "zone_management": false, 00:07:51.900 "zone_append": false, 00:07:51.900 "compare": false, 00:07:51.900 "compare_and_write": false, 00:07:51.900 "abort": false, 00:07:51.900 "seek_hole": false, 00:07:51.900 "seek_data": false, 00:07:51.900 "copy": false, 
00:07:51.900 "nvme_iov_md": false 00:07:51.900 }, 00:07:51.900 "memory_domains": [ 00:07:51.900 { 00:07:51.900 "dma_device_id": "system", 00:07:51.900 "dma_device_type": 1 00:07:51.900 }, 00:07:51.900 { 00:07:51.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.900 "dma_device_type": 2 00:07:51.900 }, 00:07:51.900 { 00:07:51.900 "dma_device_id": "system", 00:07:51.900 "dma_device_type": 1 00:07:51.900 }, 00:07:51.900 { 00:07:51.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.900 "dma_device_type": 2 00:07:51.900 } 00:07:51.900 ], 00:07:51.900 "driver_specific": { 00:07:51.900 "raid": { 00:07:51.900 "uuid": "91412ab8-ec36-459e-9af8-2aeb5e96fd0f", 00:07:51.900 "strip_size_kb": 64, 00:07:51.900 "state": "online", 00:07:51.900 "raid_level": "concat", 00:07:51.900 "superblock": true, 00:07:51.900 "num_base_bdevs": 2, 00:07:51.900 "num_base_bdevs_discovered": 2, 00:07:51.901 "num_base_bdevs_operational": 2, 00:07:51.901 "base_bdevs_list": [ 00:07:51.901 { 00:07:51.901 "name": "BaseBdev1", 00:07:51.901 "uuid": "655572e3-06eb-4496-ad5c-02d8f333787d", 00:07:51.901 "is_configured": true, 00:07:51.901 "data_offset": 2048, 00:07:51.901 "data_size": 63488 00:07:51.901 }, 00:07:51.901 { 00:07:51.901 "name": "BaseBdev2", 00:07:51.901 "uuid": "7b9ae854-32fd-40ac-a2b6-40f1499b0274", 00:07:51.901 "is_configured": true, 00:07:51.901 "data_offset": 2048, 00:07:51.901 "data_size": 63488 00:07:51.901 } 00:07:51.901 ] 00:07:51.901 } 00:07:51.901 } 00:07:51.901 }' 00:07:51.901 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:51.901 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:51.901 BaseBdev2' 00:07:51.901 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.901 03:07:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:51.901 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.901 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:51.901 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.901 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.901 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.901 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.901 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.901 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.901 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.160 [2024-11-18 03:07:55.527033] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:52.160 [2024-11-18 03:07:55.527135] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.160 [2024-11-18 03:07:55.527240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.160 
03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.160 "name": "Existed_Raid", 00:07:52.160 "uuid": "91412ab8-ec36-459e-9af8-2aeb5e96fd0f", 00:07:52.160 "strip_size_kb": 64, 00:07:52.160 "state": "offline", 00:07:52.160 "raid_level": "concat", 00:07:52.160 "superblock": true, 00:07:52.160 "num_base_bdevs": 2, 00:07:52.160 "num_base_bdevs_discovered": 1, 00:07:52.160 "num_base_bdevs_operational": 1, 00:07:52.160 "base_bdevs_list": [ 00:07:52.160 { 00:07:52.160 "name": null, 00:07:52.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.160 "is_configured": false, 00:07:52.160 "data_offset": 0, 00:07:52.160 "data_size": 63488 00:07:52.160 }, 00:07:52.160 { 00:07:52.160 "name": "BaseBdev2", 00:07:52.160 "uuid": "7b9ae854-32fd-40ac-a2b6-40f1499b0274", 00:07:52.160 
"is_configured": true, 00:07:52.160 "data_offset": 2048, 00:07:52.160 "data_size": 63488 00:07:52.160 } 00:07:52.160 ] 00:07:52.160 }' 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.160 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.419 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:52.419 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:52.419 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.419 03:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:52.419 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.419 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.678 03:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.678 [2024-11-18 03:07:56.034009] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:52.678 [2024-11-18 03:07:56.034122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:52.678 03:07:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73430 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73430 ']' 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73430 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73430 00:07:52.678 killing process with pid 73430 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73430' 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73430 00:07:52.678 [2024-11-18 03:07:56.131421] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.678 03:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73430 00:07:52.678 [2024-11-18 03:07:56.132470] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.937 03:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:52.937 00:07:52.937 real 0m3.947s 00:07:52.937 user 0m6.224s 00:07:52.937 sys 0m0.775s 00:07:52.937 03:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.937 03:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.937 ************************************ 00:07:52.937 END TEST raid_state_function_test_sb 00:07:52.937 ************************************ 00:07:52.937 03:07:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:52.937 03:07:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:52.937 03:07:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.937 03:07:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.937 ************************************ 00:07:52.937 START TEST raid_superblock_test 00:07:52.937 ************************************ 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73661 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:52.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73661 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73661 ']' 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.937 03:07:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.196 [2024-11-18 03:07:56.538350] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:53.196 [2024-11-18 03:07:56.538475] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73661 ] 00:07:53.196 [2024-11-18 03:07:56.682150] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.196 [2024-11-18 03:07:56.733037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.454 [2024-11-18 03:07:56.775262] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.454 [2024-11-18 03:07:56.775301] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( 
i = 1 )) 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.028 malloc1 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.028 [2024-11-18 03:07:57.405549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:54.028 [2024-11-18 03:07:57.405626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.028 [2024-11-18 03:07:57.405649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:07:54.028 [2024-11-18 03:07:57.405664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.028 [2024-11-18 03:07:57.407953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.028 [2024-11-18 03:07:57.408006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:54.028 pt1 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.028 malloc2 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.028 [2024-11-18 03:07:57.445167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:54.028 [2024-11-18 03:07:57.445232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.028 [2024-11-18 03:07:57.445251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:54.028 [2024-11-18 03:07:57.445262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.028 [2024-11-18 03:07:57.447566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.028 [2024-11-18 03:07:57.447612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:54.028 pt2 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.028 [2024-11-18 03:07:57.457200] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:54.028 [2024-11-18 03:07:57.459068] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:54.028 [2024-11-18 03:07:57.459239] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000006280 00:07:54.028 [2024-11-18 03:07:57.459256] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:54.028 [2024-11-18 03:07:57.459552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:54.028 [2024-11-18 03:07:57.459689] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:54.028 [2024-11-18 03:07:57.459703] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:54.028 [2024-11-18 03:07:57.459838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.028 "name": "raid_bdev1", 00:07:54.028 "uuid": "27780605-b68e-467a-a7a7-480f5d2e60ef", 00:07:54.028 "strip_size_kb": 64, 00:07:54.028 "state": "online", 00:07:54.028 "raid_level": "concat", 00:07:54.028 "superblock": true, 00:07:54.028 "num_base_bdevs": 2, 00:07:54.028 "num_base_bdevs_discovered": 2, 00:07:54.028 "num_base_bdevs_operational": 2, 00:07:54.028 "base_bdevs_list": [ 00:07:54.028 { 00:07:54.028 "name": "pt1", 00:07:54.028 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:54.028 "is_configured": true, 00:07:54.028 "data_offset": 2048, 00:07:54.028 "data_size": 63488 00:07:54.028 }, 00:07:54.028 { 00:07:54.028 "name": "pt2", 00:07:54.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.028 "is_configured": true, 00:07:54.028 "data_offset": 2048, 00:07:54.028 "data_size": 63488 00:07:54.028 } 00:07:54.028 ] 00:07:54.028 }' 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.028 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.599 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:54.599 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:54.599 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:54.599 03:07:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:54.599 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:54.599 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:54.599 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:54.599 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.599 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.599 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:54.599 [2024-11-18 03:07:57.892836] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.599 03:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.599 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:54.599 "name": "raid_bdev1", 00:07:54.599 "aliases": [ 00:07:54.599 "27780605-b68e-467a-a7a7-480f5d2e60ef" 00:07:54.599 ], 00:07:54.599 "product_name": "Raid Volume", 00:07:54.599 "block_size": 512, 00:07:54.599 "num_blocks": 126976, 00:07:54.599 "uuid": "27780605-b68e-467a-a7a7-480f5d2e60ef", 00:07:54.599 "assigned_rate_limits": { 00:07:54.599 "rw_ios_per_sec": 0, 00:07:54.599 "rw_mbytes_per_sec": 0, 00:07:54.599 "r_mbytes_per_sec": 0, 00:07:54.599 "w_mbytes_per_sec": 0 00:07:54.599 }, 00:07:54.599 "claimed": false, 00:07:54.599 "zoned": false, 00:07:54.599 "supported_io_types": { 00:07:54.599 "read": true, 00:07:54.599 "write": true, 00:07:54.599 "unmap": true, 00:07:54.599 "flush": true, 00:07:54.599 "reset": true, 00:07:54.599 "nvme_admin": false, 00:07:54.599 "nvme_io": false, 00:07:54.599 "nvme_io_md": false, 00:07:54.599 "write_zeroes": true, 00:07:54.599 "zcopy": false, 00:07:54.599 "get_zone_info": false, 00:07:54.599 "zone_management": false, 00:07:54.599 
"zone_append": false, 00:07:54.599 "compare": false, 00:07:54.599 "compare_and_write": false, 00:07:54.599 "abort": false, 00:07:54.599 "seek_hole": false, 00:07:54.599 "seek_data": false, 00:07:54.599 "copy": false, 00:07:54.599 "nvme_iov_md": false 00:07:54.599 }, 00:07:54.599 "memory_domains": [ 00:07:54.599 { 00:07:54.599 "dma_device_id": "system", 00:07:54.599 "dma_device_type": 1 00:07:54.599 }, 00:07:54.599 { 00:07:54.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.599 "dma_device_type": 2 00:07:54.599 }, 00:07:54.599 { 00:07:54.599 "dma_device_id": "system", 00:07:54.599 "dma_device_type": 1 00:07:54.599 }, 00:07:54.599 { 00:07:54.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.599 "dma_device_type": 2 00:07:54.599 } 00:07:54.599 ], 00:07:54.599 "driver_specific": { 00:07:54.599 "raid": { 00:07:54.599 "uuid": "27780605-b68e-467a-a7a7-480f5d2e60ef", 00:07:54.599 "strip_size_kb": 64, 00:07:54.599 "state": "online", 00:07:54.599 "raid_level": "concat", 00:07:54.599 "superblock": true, 00:07:54.599 "num_base_bdevs": 2, 00:07:54.599 "num_base_bdevs_discovered": 2, 00:07:54.599 "num_base_bdevs_operational": 2, 00:07:54.599 "base_bdevs_list": [ 00:07:54.599 { 00:07:54.599 "name": "pt1", 00:07:54.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:54.599 "is_configured": true, 00:07:54.599 "data_offset": 2048, 00:07:54.599 "data_size": 63488 00:07:54.599 }, 00:07:54.599 { 00:07:54.599 "name": "pt2", 00:07:54.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.599 "is_configured": true, 00:07:54.599 "data_offset": 2048, 00:07:54.599 "data_size": 63488 00:07:54.599 } 00:07:54.599 ] 00:07:54.599 } 00:07:54.599 } 00:07:54.599 }' 00:07:54.599 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:54.599 03:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:54.599 pt2' 00:07:54.599 03:07:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.599 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:54.599 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.599 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:54.599 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.599 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.600 [2024-11-18 03:07:58.104406] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=27780605-b68e-467a-a7a7-480f5d2e60ef 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 27780605-b68e-467a-a7a7-480f5d2e60ef ']' 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.600 [2024-11-18 03:07:58.152043] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.600 [2024-11-18 03:07:58.152075] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.600 [2024-11-18 03:07:58.152165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.600 [2024-11-18 03:07:58.152221] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.600 [2024-11-18 03:07:58.152241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:54.600 03:07:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.600 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | 
any' 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.860 [2024-11-18 03:07:58.283879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:54.860 [2024-11-18 03:07:58.285927] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:54.860 [2024-11-18 03:07:58.286085] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:54.860 [2024-11-18 03:07:58.286150] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:54.860 [2024-11-18 03:07:58.286172] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.860 [2024-11-18 03:07:58.286182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:54.860 request: 00:07:54.860 { 00:07:54.860 "name": "raid_bdev1", 00:07:54.860 "raid_level": "concat", 00:07:54.860 "base_bdevs": [ 00:07:54.860 "malloc1", 00:07:54.860 "malloc2" 00:07:54.860 ], 00:07:54.860 "strip_size_kb": 64, 00:07:54.860 "superblock": false, 00:07:54.860 "method": "bdev_raid_create", 00:07:54.860 "req_id": 1 00:07:54.860 } 00:07:54.860 Got JSON-RPC error response 00:07:54.860 response: 00:07:54.860 { 00:07:54.860 "code": -17, 00:07:54.860 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:54.860 } 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.860 03:07:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.860 [2024-11-18 03:07:58.339727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:54.860 [2024-11-18 03:07:58.339848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.860 [2024-11-18 03:07:58.339906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:54.860 [2024-11-18 03:07:58.339952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.860 [2024-11-18 03:07:58.342400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.860 [2024-11-18 03:07:58.342483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:54.860 [2024-11-18 03:07:58.342615] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:54.860 [2024-11-18 03:07:58.342704] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:54.860 pt1 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.860 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.861 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.861 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.861 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.861 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.861 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.861 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.861 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.861 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.861 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.861 "name": "raid_bdev1", 00:07:54.861 "uuid": "27780605-b68e-467a-a7a7-480f5d2e60ef", 00:07:54.861 "strip_size_kb": 64, 00:07:54.861 "state": "configuring", 00:07:54.861 "raid_level": "concat", 00:07:54.861 "superblock": true, 00:07:54.861 "num_base_bdevs": 2, 00:07:54.861 
"num_base_bdevs_discovered": 1, 00:07:54.861 "num_base_bdevs_operational": 2, 00:07:54.861 "base_bdevs_list": [ 00:07:54.861 { 00:07:54.861 "name": "pt1", 00:07:54.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:54.861 "is_configured": true, 00:07:54.861 "data_offset": 2048, 00:07:54.861 "data_size": 63488 00:07:54.861 }, 00:07:54.861 { 00:07:54.861 "name": null, 00:07:54.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.861 "is_configured": false, 00:07:54.861 "data_offset": 2048, 00:07:54.861 "data_size": 63488 00:07:54.861 } 00:07:54.861 ] 00:07:54.861 }' 00:07:54.861 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.861 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.430 [2024-11-18 03:07:58.787003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:55.430 [2024-11-18 03:07:58.787080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.430 [2024-11-18 03:07:58.787108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:55.430 [2024-11-18 03:07:58.787118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.430 [2024-11-18 03:07:58.787598] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.430 [2024-11-18 03:07:58.787630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:55.430 [2024-11-18 03:07:58.787719] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:55.430 [2024-11-18 03:07:58.787745] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:55.430 [2024-11-18 03:07:58.787842] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:55.430 [2024-11-18 03:07:58.787856] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:55.430 [2024-11-18 03:07:58.788147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:55.430 [2024-11-18 03:07:58.788280] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:55.430 [2024-11-18 03:07:58.788297] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:55.430 [2024-11-18 03:07:58.788412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.430 pt2 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.430 "name": "raid_bdev1", 00:07:55.430 "uuid": "27780605-b68e-467a-a7a7-480f5d2e60ef", 00:07:55.430 "strip_size_kb": 64, 00:07:55.430 "state": "online", 00:07:55.430 "raid_level": "concat", 00:07:55.430 "superblock": true, 00:07:55.430 "num_base_bdevs": 2, 00:07:55.430 "num_base_bdevs_discovered": 2, 00:07:55.430 "num_base_bdevs_operational": 2, 00:07:55.430 "base_bdevs_list": [ 00:07:55.430 { 00:07:55.430 "name": "pt1", 00:07:55.430 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.430 "is_configured": true, 00:07:55.430 "data_offset": 2048, 00:07:55.430 "data_size": 63488 00:07:55.430 }, 00:07:55.430 { 00:07:55.430 "name": "pt2", 00:07:55.430 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:07:55.430 "is_configured": true, 00:07:55.430 "data_offset": 2048, 00:07:55.430 "data_size": 63488 00:07:55.430 } 00:07:55.430 ] 00:07:55.430 }' 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.430 03:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.690 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:55.690 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:55.690 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:55.690 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:55.690 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:55.690 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:55.690 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:55.690 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:55.690 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.690 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.948 [2024-11-18 03:07:59.266460] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.948 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.948 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:55.948 "name": "raid_bdev1", 00:07:55.948 "aliases": [ 00:07:55.948 "27780605-b68e-467a-a7a7-480f5d2e60ef" 00:07:55.948 ], 00:07:55.948 "product_name": "Raid Volume", 00:07:55.949 "block_size": 512, 00:07:55.949 
"num_blocks": 126976, 00:07:55.949 "uuid": "27780605-b68e-467a-a7a7-480f5d2e60ef", 00:07:55.949 "assigned_rate_limits": { 00:07:55.949 "rw_ios_per_sec": 0, 00:07:55.949 "rw_mbytes_per_sec": 0, 00:07:55.949 "r_mbytes_per_sec": 0, 00:07:55.949 "w_mbytes_per_sec": 0 00:07:55.949 }, 00:07:55.949 "claimed": false, 00:07:55.949 "zoned": false, 00:07:55.949 "supported_io_types": { 00:07:55.949 "read": true, 00:07:55.949 "write": true, 00:07:55.949 "unmap": true, 00:07:55.949 "flush": true, 00:07:55.949 "reset": true, 00:07:55.949 "nvme_admin": false, 00:07:55.949 "nvme_io": false, 00:07:55.949 "nvme_io_md": false, 00:07:55.949 "write_zeroes": true, 00:07:55.949 "zcopy": false, 00:07:55.949 "get_zone_info": false, 00:07:55.949 "zone_management": false, 00:07:55.949 "zone_append": false, 00:07:55.949 "compare": false, 00:07:55.949 "compare_and_write": false, 00:07:55.949 "abort": false, 00:07:55.949 "seek_hole": false, 00:07:55.949 "seek_data": false, 00:07:55.949 "copy": false, 00:07:55.949 "nvme_iov_md": false 00:07:55.949 }, 00:07:55.949 "memory_domains": [ 00:07:55.949 { 00:07:55.949 "dma_device_id": "system", 00:07:55.949 "dma_device_type": 1 00:07:55.949 }, 00:07:55.949 { 00:07:55.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.949 "dma_device_type": 2 00:07:55.949 }, 00:07:55.949 { 00:07:55.949 "dma_device_id": "system", 00:07:55.949 "dma_device_type": 1 00:07:55.949 }, 00:07:55.949 { 00:07:55.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.949 "dma_device_type": 2 00:07:55.949 } 00:07:55.949 ], 00:07:55.949 "driver_specific": { 00:07:55.949 "raid": { 00:07:55.949 "uuid": "27780605-b68e-467a-a7a7-480f5d2e60ef", 00:07:55.949 "strip_size_kb": 64, 00:07:55.949 "state": "online", 00:07:55.949 "raid_level": "concat", 00:07:55.949 "superblock": true, 00:07:55.949 "num_base_bdevs": 2, 00:07:55.949 "num_base_bdevs_discovered": 2, 00:07:55.949 "num_base_bdevs_operational": 2, 00:07:55.949 "base_bdevs_list": [ 00:07:55.949 { 00:07:55.949 "name": "pt1", 
00:07:55.949 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.949 "is_configured": true, 00:07:55.949 "data_offset": 2048, 00:07:55.949 "data_size": 63488 00:07:55.949 }, 00:07:55.949 { 00:07:55.949 "name": "pt2", 00:07:55.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.949 "is_configured": true, 00:07:55.949 "data_offset": 2048, 00:07:55.949 "data_size": 63488 00:07:55.949 } 00:07:55.949 ] 00:07:55.949 } 00:07:55.949 } 00:07:55.949 }' 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:55.949 pt2' 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.949 [2024-11-18 03:07:59.482097] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 27780605-b68e-467a-a7a7-480f5d2e60ef '!=' 27780605-b68e-467a-a7a7-480f5d2e60ef ']' 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@563 -- # killprocess 73661 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73661 ']' 00:07:55.949 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 73661 00:07:56.208 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:56.208 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.208 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73661 00:07:56.208 killing process with pid 73661 00:07:56.208 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.208 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.208 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73661' 00:07:56.208 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73661 00:07:56.208 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73661 00:07:56.208 [2024-11-18 03:07:59.553851] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.208 [2024-11-18 03:07:59.553956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.208 [2024-11-18 03:07:59.554046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.208 [2024-11-18 03:07:59.554062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:56.208 [2024-11-18 03:07:59.577451] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.486 03:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:56.486 00:07:56.486 real 0m3.375s 00:07:56.486 user 0m5.220s 00:07:56.486 
sys 0m0.719s 00:07:56.486 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.486 03:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.486 ************************************ 00:07:56.486 END TEST raid_superblock_test 00:07:56.486 ************************************ 00:07:56.486 03:07:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:56.486 03:07:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:56.486 03:07:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.486 03:07:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:56.486 ************************************ 00:07:56.486 START TEST raid_read_error_test 00:07:56.486 ************************************ 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yU6yt1x4cQ 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73866 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73866 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73866 ']' 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.486 03:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.486 [2024-11-18 03:07:59.990097] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:56.486 [2024-11-18 03:07:59.990233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73866 ] 00:07:56.744 [2024-11-18 03:08:00.135483] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.744 [2024-11-18 03:08:00.185038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.744 [2024-11-18 03:08:00.227575] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.744 [2024-11-18 03:08:00.227615] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.312 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.312 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:57.312 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:57.312 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:07:57.312 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.312 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.312 BaseBdev1_malloc 00:07:57.312 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.312 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:57.312 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.312 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.312 true 00:07:57.312 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.312 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:57.312 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.312 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.312 [2024-11-18 03:08:00.886137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:57.312 [2024-11-18 03:08:00.886195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.312 [2024-11-18 03:08:00.886217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:57.312 [2024-11-18 03:08:00.886227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.571 [2024-11-18 03:08:00.888571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.571 [2024-11-18 03:08:00.888613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:57.571 BaseBdev1 00:07:57.571 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:57.571 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:57.571 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:57.571 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.571 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.571 BaseBdev2_malloc 00:07:57.571 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.571 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:57.571 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.571 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.571 true 00:07:57.571 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.571 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:57.571 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.571 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.571 [2024-11-18 03:08:00.938124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:57.571 [2024-11-18 03:08:00.938186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.571 [2024-11-18 03:08:00.938209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:57.571 [2024-11-18 03:08:00.938218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.571 [2024-11-18 03:08:00.940662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:07:57.571 [2024-11-18 03:08:00.940703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:57.571 BaseBdev2 00:07:57.571 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.571 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:57.571 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.571 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.571 [2024-11-18 03:08:00.950154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.571 [2024-11-18 03:08:00.952359] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:57.571 [2024-11-18 03:08:00.952607] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:57.571 [2024-11-18 03:08:00.952656] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:57.571 [2024-11-18 03:08:00.952996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:57.571 [2024-11-18 03:08:00.953195] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:57.571 [2024-11-18 03:08:00.953246] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:57.571 [2024-11-18 03:08:00.953445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.572 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.572 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:57.572 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:07:57.572 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.572 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.572 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.572 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.572 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.572 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.572 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.572 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.572 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.572 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.572 03:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.572 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.572 03:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.572 03:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.572 "name": "raid_bdev1", 00:07:57.572 "uuid": "ac22abb7-6a0c-41fa-89f5-166f7e5a51fc", 00:07:57.572 "strip_size_kb": 64, 00:07:57.572 "state": "online", 00:07:57.572 "raid_level": "concat", 00:07:57.572 "superblock": true, 00:07:57.572 "num_base_bdevs": 2, 00:07:57.572 "num_base_bdevs_discovered": 2, 00:07:57.572 "num_base_bdevs_operational": 2, 00:07:57.572 "base_bdevs_list": [ 00:07:57.572 { 00:07:57.572 "name": "BaseBdev1", 00:07:57.572 "uuid": 
"3b8d83e7-f82f-5661-bc4e-812b5eb9f455", 00:07:57.572 "is_configured": true, 00:07:57.572 "data_offset": 2048, 00:07:57.572 "data_size": 63488 00:07:57.572 }, 00:07:57.572 { 00:07:57.572 "name": "BaseBdev2", 00:07:57.572 "uuid": "e8f832d0-291a-51e6-adcc-24af22047a4a", 00:07:57.572 "is_configured": true, 00:07:57.572 "data_offset": 2048, 00:07:57.572 "data_size": 63488 00:07:57.572 } 00:07:57.572 ] 00:07:57.572 }' 00:07:57.572 03:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.572 03:08:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.830 03:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:57.830 03:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:58.089 [2024-11-18 03:08:01.441714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:59.025 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:59.025 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.025 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.025 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.025 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:59.025 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:59.025 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:59.025 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:59.025 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:59.025 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.025 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.025 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.025 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.025 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.025 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.025 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.025 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.025 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.026 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.026 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.026 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.026 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.026 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.026 "name": "raid_bdev1", 00:07:59.026 "uuid": "ac22abb7-6a0c-41fa-89f5-166f7e5a51fc", 00:07:59.026 "strip_size_kb": 64, 00:07:59.026 "state": "online", 00:07:59.026 "raid_level": "concat", 00:07:59.026 "superblock": true, 00:07:59.026 "num_base_bdevs": 2, 00:07:59.026 "num_base_bdevs_discovered": 2, 00:07:59.026 "num_base_bdevs_operational": 2, 00:07:59.026 "base_bdevs_list": [ 00:07:59.026 { 00:07:59.026 "name": "BaseBdev1", 00:07:59.026 "uuid": 
"3b8d83e7-f82f-5661-bc4e-812b5eb9f455", 00:07:59.026 "is_configured": true, 00:07:59.026 "data_offset": 2048, 00:07:59.026 "data_size": 63488 00:07:59.026 }, 00:07:59.026 { 00:07:59.026 "name": "BaseBdev2", 00:07:59.026 "uuid": "e8f832d0-291a-51e6-adcc-24af22047a4a", 00:07:59.026 "is_configured": true, 00:07:59.026 "data_offset": 2048, 00:07:59.026 "data_size": 63488 00:07:59.026 } 00:07:59.026 ] 00:07:59.026 }' 00:07:59.026 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.026 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.285 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:59.285 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.285 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.285 [2024-11-18 03:08:02.785815] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.285 [2024-11-18 03:08:02.785920] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.285 [2024-11-18 03:08:02.788813] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.285 [2024-11-18 03:08:02.788912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.285 [2024-11-18 03:08:02.788994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.285 [2024-11-18 03:08:02.789054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:59.285 { 00:07:59.285 "results": [ 00:07:59.285 { 00:07:59.285 "job": "raid_bdev1", 00:07:59.285 "core_mask": "0x1", 00:07:59.285 "workload": "randrw", 00:07:59.285 "percentage": 50, 00:07:59.285 "status": "finished", 00:07:59.285 "queue_depth": 1, 00:07:59.285 "io_size": 
131072, 00:07:59.285 "runtime": 1.344791, 00:07:59.285 "iops": 15931.100074286636, 00:07:59.285 "mibps": 1991.3875092858295, 00:07:59.285 "io_failed": 1, 00:07:59.285 "io_timeout": 0, 00:07:59.285 "avg_latency_us": 86.86789497230615, 00:07:59.285 "min_latency_us": 27.72401746724891, 00:07:59.285 "max_latency_us": 1445.2262008733624 00:07:59.285 } 00:07:59.285 ], 00:07:59.285 "core_count": 1 00:07:59.285 } 00:07:59.285 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.285 03:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73866 00:07:59.285 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73866 ']' 00:07:59.285 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73866 00:07:59.285 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:59.285 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.285 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73866 00:07:59.285 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:59.285 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:59.285 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73866' 00:07:59.285 killing process with pid 73866 00:07:59.285 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73866 00:07:59.285 03:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73866 00:07:59.285 [2024-11-18 03:08:02.828548] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:59.285 [2024-11-18 03:08:02.844734] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:59.545 03:08:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:59.545 03:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yU6yt1x4cQ 00:07:59.545 03:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:59.545 03:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:59.545 03:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:59.545 03:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:59.545 03:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:59.545 03:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:59.545 00:07:59.545 real 0m3.201s 00:07:59.545 user 0m4.059s 00:07:59.545 sys 0m0.490s 00:07:59.545 03:08:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.545 03:08:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.545 ************************************ 00:07:59.545 END TEST raid_read_error_test 00:07:59.545 ************************************ 00:07:59.804 03:08:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:59.804 03:08:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:59.804 03:08:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.804 03:08:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:59.804 ************************************ 00:07:59.804 START TEST raid_write_error_test 00:07:59.804 ************************************ 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 
00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:59.804 
03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dt2j7FYoPQ 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73996 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73996 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73996 ']' 00:07:59.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.804 03:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.804 [2024-11-18 03:08:03.286194] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:59.804 [2024-11-18 03:08:03.286385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73996 ] 00:08:00.064 [2024-11-18 03:08:03.453451] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.064 [2024-11-18 03:08:03.504062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.064 [2024-11-18 03:08:03.546694] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.064 [2024-11-18 03:08:03.546732] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.631 BaseBdev1_malloc 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.631 true 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.631 [2024-11-18 03:08:04.197391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:00.631 [2024-11-18 03:08:04.197448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.631 [2024-11-18 03:08:04.197486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:00.631 [2024-11-18 03:08:04.197497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.631 [2024-11-18 03:08:04.199907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.631 [2024-11-18 03:08:04.199951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:00.631 BaseBdev1 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.631 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.902 BaseBdev2_malloc 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:00.902 03:08:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.902 true 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.902 [2024-11-18 03:08:04.249224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:00.902 [2024-11-18 03:08:04.249283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.902 [2024-11-18 03:08:04.249304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:00.902 [2024-11-18 03:08:04.249313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.902 [2024-11-18 03:08:04.251583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.902 [2024-11-18 03:08:04.251698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:00.902 BaseBdev2 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.902 [2024-11-18 03:08:04.261252] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:00.902 [2024-11-18 03:08:04.263261] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:00.902 [2024-11-18 03:08:04.263451] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:00.902 [2024-11-18 03:08:04.263466] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:00.902 [2024-11-18 03:08:04.263750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:00.902 [2024-11-18 03:08:04.263888] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:00.902 [2024-11-18 03:08:04.263901] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:00.902 [2024-11-18 03:08:04.264059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.902 03:08:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.902 "name": "raid_bdev1", 00:08:00.902 "uuid": "dfa901cb-068a-4bf1-8de8-16e783b72dc4", 00:08:00.902 "strip_size_kb": 64, 00:08:00.902 "state": "online", 00:08:00.902 "raid_level": "concat", 00:08:00.902 "superblock": true, 00:08:00.902 "num_base_bdevs": 2, 00:08:00.902 "num_base_bdevs_discovered": 2, 00:08:00.902 "num_base_bdevs_operational": 2, 00:08:00.902 "base_bdevs_list": [ 00:08:00.902 { 00:08:00.902 "name": "BaseBdev1", 00:08:00.902 "uuid": "4ffcae04-ba01-5755-affe-8ff919dd13e0", 00:08:00.902 "is_configured": true, 00:08:00.902 "data_offset": 2048, 00:08:00.902 "data_size": 63488 00:08:00.902 }, 00:08:00.902 { 00:08:00.902 "name": "BaseBdev2", 00:08:00.902 "uuid": "77a36fdd-af81-5406-bd55-09ba6758a7a7", 00:08:00.902 "is_configured": true, 00:08:00.902 "data_offset": 2048, 00:08:00.902 "data_size": 63488 00:08:00.902 } 00:08:00.902 ] 00:08:00.902 }' 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.902 03:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.160 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:01.160 03:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:01.419 [2024-11-18 03:08:04.772784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.384 03:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.385 03:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.385 03:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.385 03:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.385 "name": "raid_bdev1", 00:08:02.385 "uuid": "dfa901cb-068a-4bf1-8de8-16e783b72dc4", 00:08:02.385 "strip_size_kb": 64, 00:08:02.385 "state": "online", 00:08:02.385 "raid_level": "concat", 00:08:02.385 "superblock": true, 00:08:02.385 "num_base_bdevs": 2, 00:08:02.385 "num_base_bdevs_discovered": 2, 00:08:02.385 "num_base_bdevs_operational": 2, 00:08:02.385 "base_bdevs_list": [ 00:08:02.385 { 00:08:02.385 "name": "BaseBdev1", 00:08:02.385 "uuid": "4ffcae04-ba01-5755-affe-8ff919dd13e0", 00:08:02.385 "is_configured": true, 00:08:02.385 "data_offset": 2048, 00:08:02.385 "data_size": 63488 00:08:02.385 }, 00:08:02.385 { 00:08:02.385 "name": "BaseBdev2", 00:08:02.385 "uuid": "77a36fdd-af81-5406-bd55-09ba6758a7a7", 00:08:02.385 "is_configured": true, 00:08:02.385 "data_offset": 2048, 00:08:02.385 "data_size": 63488 00:08:02.385 } 00:08:02.385 ] 00:08:02.385 }' 00:08:02.385 03:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.385 03:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.644 03:08:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:02.644 03:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.644 03:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.644 [2024-11-18 03:08:06.121147] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.644 [2024-11-18 03:08:06.121249] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.644 [2024-11-18 03:08:06.124138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.644 [2024-11-18 03:08:06.124227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.644 [2024-11-18 03:08:06.124293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.644 [2024-11-18 03:08:06.124341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:02.644 03:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.644 { 00:08:02.644 "results": [ 00:08:02.644 { 00:08:02.644 "job": "raid_bdev1", 00:08:02.644 "core_mask": "0x1", 00:08:02.644 "workload": "randrw", 00:08:02.644 "percentage": 50, 00:08:02.644 "status": "finished", 00:08:02.644 "queue_depth": 1, 00:08:02.644 "io_size": 131072, 00:08:02.644 "runtime": 1.349011, 00:08:02.644 "iops": 15965.770479262215, 00:08:02.644 "mibps": 1995.721309907777, 00:08:02.644 "io_failed": 1, 00:08:02.644 "io_timeout": 0, 00:08:02.644 "avg_latency_us": 86.62413807714694, 00:08:02.644 "min_latency_us": 26.606113537117903, 00:08:02.644 "max_latency_us": 1609.7816593886462 00:08:02.644 } 00:08:02.644 ], 00:08:02.644 "core_count": 1 00:08:02.644 } 00:08:02.644 03:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73996 00:08:02.644 03:08:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73996 ']' 00:08:02.644 03:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73996 00:08:02.644 03:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:02.644 03:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:02.644 03:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73996 00:08:02.644 killing process with pid 73996 00:08:02.644 03:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.644 03:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.644 03:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73996' 00:08:02.644 03:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73996 00:08:02.644 [2024-11-18 03:08:06.165457] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.644 03:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73996 00:08:02.644 [2024-11-18 03:08:06.181405] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:02.903 03:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dt2j7FYoPQ 00:08:02.903 03:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:02.903 03:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:02.903 03:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:02.903 03:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:02.903 ************************************ 00:08:02.903 END TEST raid_write_error_test 00:08:02.903 
************************************ 00:08:02.903 03:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:02.903 03:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:02.903 03:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:02.903 00:08:02.903 real 0m3.269s 00:08:02.903 user 0m4.139s 00:08:02.903 sys 0m0.542s 00:08:02.903 03:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.903 03:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.903 03:08:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:02.903 03:08:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:02.903 03:08:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:02.903 03:08:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.163 03:08:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:03.163 ************************************ 00:08:03.163 START TEST raid_state_function_test 00:08:03.163 ************************************ 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74123 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid 
pid: 74123' 00:08:03.163 Process raid pid: 74123 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74123 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 74123 ']' 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:03.163 03:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.163 [2024-11-18 03:08:06.579088] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:03.163 [2024-11-18 03:08:06.579337] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.422 [2024-11-18 03:08:06.738812] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.422 [2024-11-18 03:08:06.788797] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.422 [2024-11-18 03:08:06.831819] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.422 [2024-11-18 03:08:06.831935] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.989 [2024-11-18 03:08:07.473281] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:03.989 [2024-11-18 03:08:07.473394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:03.989 [2024-11-18 03:08:07.473427] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.989 [2024-11-18 03:08:07.473452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.989 03:08:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.989 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.989 "name": "Existed_Raid", 00:08:03.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.990 "strip_size_kb": 0, 00:08:03.990 "state": "configuring", 00:08:03.990 
"raid_level": "raid1", 00:08:03.990 "superblock": false, 00:08:03.990 "num_base_bdevs": 2, 00:08:03.990 "num_base_bdevs_discovered": 0, 00:08:03.990 "num_base_bdevs_operational": 2, 00:08:03.990 "base_bdevs_list": [ 00:08:03.990 { 00:08:03.990 "name": "BaseBdev1", 00:08:03.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.990 "is_configured": false, 00:08:03.990 "data_offset": 0, 00:08:03.990 "data_size": 0 00:08:03.990 }, 00:08:03.990 { 00:08:03.990 "name": "BaseBdev2", 00:08:03.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.990 "is_configured": false, 00:08:03.990 "data_offset": 0, 00:08:03.990 "data_size": 0 00:08:03.990 } 00:08:03.990 ] 00:08:03.990 }' 00:08:03.990 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.990 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.556 [2024-11-18 03:08:07.884487] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:04.556 [2024-11-18 03:08:07.884536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:04.556 [2024-11-18 03:08:07.896500] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:04.556 [2024-11-18 03:08:07.896548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:04.556 [2024-11-18 03:08:07.896557] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.556 [2024-11-18 03:08:07.896566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.556 [2024-11-18 03:08:07.917493] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:04.556 BaseBdev1 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.556 [ 00:08:04.556 { 00:08:04.556 "name": "BaseBdev1", 00:08:04.556 "aliases": [ 00:08:04.556 "0e94dba8-f138-444e-b801-34f1172ba90b" 00:08:04.556 ], 00:08:04.556 "product_name": "Malloc disk", 00:08:04.556 "block_size": 512, 00:08:04.556 "num_blocks": 65536, 00:08:04.556 "uuid": "0e94dba8-f138-444e-b801-34f1172ba90b", 00:08:04.556 "assigned_rate_limits": { 00:08:04.556 "rw_ios_per_sec": 0, 00:08:04.556 "rw_mbytes_per_sec": 0, 00:08:04.556 "r_mbytes_per_sec": 0, 00:08:04.556 "w_mbytes_per_sec": 0 00:08:04.556 }, 00:08:04.556 "claimed": true, 00:08:04.556 "claim_type": "exclusive_write", 00:08:04.556 "zoned": false, 00:08:04.556 "supported_io_types": { 00:08:04.556 "read": true, 00:08:04.556 "write": true, 00:08:04.556 "unmap": true, 00:08:04.556 "flush": true, 00:08:04.556 "reset": true, 00:08:04.556 "nvme_admin": false, 00:08:04.556 "nvme_io": false, 00:08:04.556 "nvme_io_md": false, 00:08:04.556 "write_zeroes": true, 00:08:04.556 "zcopy": true, 00:08:04.556 "get_zone_info": false, 00:08:04.556 "zone_management": false, 00:08:04.556 "zone_append": false, 00:08:04.556 "compare": false, 00:08:04.556 "compare_and_write": false, 00:08:04.556 "abort": true, 00:08:04.556 "seek_hole": false, 00:08:04.556 "seek_data": false, 00:08:04.556 "copy": true, 00:08:04.556 "nvme_iov_md": 
false 00:08:04.556 }, 00:08:04.556 "memory_domains": [ 00:08:04.556 { 00:08:04.556 "dma_device_id": "system", 00:08:04.556 "dma_device_type": 1 00:08:04.556 }, 00:08:04.556 { 00:08:04.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.556 "dma_device_type": 2 00:08:04.556 } 00:08:04.556 ], 00:08:04.556 "driver_specific": {} 00:08:04.556 } 00:08:04.556 ] 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:04.556 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:04.557 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.557 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.557 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.557 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.557 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.557 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.557 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.557 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.557 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.557 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.557 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.557 03:08:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.557 03:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.557 03:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.557 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.557 "name": "Existed_Raid", 00:08:04.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.557 "strip_size_kb": 0, 00:08:04.557 "state": "configuring", 00:08:04.557 "raid_level": "raid1", 00:08:04.557 "superblock": false, 00:08:04.557 "num_base_bdevs": 2, 00:08:04.557 "num_base_bdevs_discovered": 1, 00:08:04.557 "num_base_bdevs_operational": 2, 00:08:04.557 "base_bdevs_list": [ 00:08:04.557 { 00:08:04.557 "name": "BaseBdev1", 00:08:04.557 "uuid": "0e94dba8-f138-444e-b801-34f1172ba90b", 00:08:04.557 "is_configured": true, 00:08:04.557 "data_offset": 0, 00:08:04.557 "data_size": 65536 00:08:04.557 }, 00:08:04.557 { 00:08:04.557 "name": "BaseBdev2", 00:08:04.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.557 "is_configured": false, 00:08:04.557 "data_offset": 0, 00:08:04.557 "data_size": 0 00:08:04.557 } 00:08:04.557 ] 00:08:04.557 }' 00:08:04.557 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.557 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.815 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:04.815 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.815 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.815 [2024-11-18 03:08:08.388776] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:04.815 [2024-11-18 03:08:08.388890] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:05.073 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.073 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:05.073 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.073 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.073 [2024-11-18 03:08:08.400785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.073 [2024-11-18 03:08:08.402868] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.073 [2024-11-18 03:08:08.402988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.073 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.073 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:05.073 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:05.073 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:05.073 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.073 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.073 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.073 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.073 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:05.073 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.073 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.073 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.073 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.073 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.074 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.074 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.074 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.074 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.074 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.074 "name": "Existed_Raid", 00:08:05.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.074 "strip_size_kb": 0, 00:08:05.074 "state": "configuring", 00:08:05.074 "raid_level": "raid1", 00:08:05.074 "superblock": false, 00:08:05.074 "num_base_bdevs": 2, 00:08:05.074 "num_base_bdevs_discovered": 1, 00:08:05.074 "num_base_bdevs_operational": 2, 00:08:05.074 "base_bdevs_list": [ 00:08:05.074 { 00:08:05.074 "name": "BaseBdev1", 00:08:05.074 "uuid": "0e94dba8-f138-444e-b801-34f1172ba90b", 00:08:05.074 "is_configured": true, 00:08:05.074 "data_offset": 0, 00:08:05.074 "data_size": 65536 00:08:05.074 }, 00:08:05.074 { 00:08:05.074 "name": "BaseBdev2", 00:08:05.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.074 "is_configured": false, 00:08:05.074 "data_offset": 0, 00:08:05.074 "data_size": 0 00:08:05.074 } 00:08:05.074 
] 00:08:05.074 }' 00:08:05.074 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.074 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.333 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:05.333 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.333 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.333 [2024-11-18 03:08:08.824677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:05.333 [2024-11-18 03:08:08.824734] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:05.333 [2024-11-18 03:08:08.824757] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:05.333 [2024-11-18 03:08:08.825112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:05.333 [2024-11-18 03:08:08.825289] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:05.333 [2024-11-18 03:08:08.825308] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:05.334 [2024-11-18 03:08:08.825550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.334 BaseBdev2 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:05.334 03:08:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.334 [ 00:08:05.334 { 00:08:05.334 "name": "BaseBdev2", 00:08:05.334 "aliases": [ 00:08:05.334 "f9a38427-aa42-40b7-928c-6545aad6190a" 00:08:05.334 ], 00:08:05.334 "product_name": "Malloc disk", 00:08:05.334 "block_size": 512, 00:08:05.334 "num_blocks": 65536, 00:08:05.334 "uuid": "f9a38427-aa42-40b7-928c-6545aad6190a", 00:08:05.334 "assigned_rate_limits": { 00:08:05.334 "rw_ios_per_sec": 0, 00:08:05.334 "rw_mbytes_per_sec": 0, 00:08:05.334 "r_mbytes_per_sec": 0, 00:08:05.334 "w_mbytes_per_sec": 0 00:08:05.334 }, 00:08:05.334 "claimed": true, 00:08:05.334 "claim_type": "exclusive_write", 00:08:05.334 "zoned": false, 00:08:05.334 "supported_io_types": { 00:08:05.334 "read": true, 00:08:05.334 "write": true, 00:08:05.334 "unmap": true, 00:08:05.334 "flush": true, 00:08:05.334 "reset": true, 00:08:05.334 "nvme_admin": false, 00:08:05.334 "nvme_io": false, 00:08:05.334 "nvme_io_md": 
false, 00:08:05.334 "write_zeroes": true, 00:08:05.334 "zcopy": true, 00:08:05.334 "get_zone_info": false, 00:08:05.334 "zone_management": false, 00:08:05.334 "zone_append": false, 00:08:05.334 "compare": false, 00:08:05.334 "compare_and_write": false, 00:08:05.334 "abort": true, 00:08:05.334 "seek_hole": false, 00:08:05.334 "seek_data": false, 00:08:05.334 "copy": true, 00:08:05.334 "nvme_iov_md": false 00:08:05.334 }, 00:08:05.334 "memory_domains": [ 00:08:05.334 { 00:08:05.334 "dma_device_id": "system", 00:08:05.334 "dma_device_type": 1 00:08:05.334 }, 00:08:05.334 { 00:08:05.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.334 "dma_device_type": 2 00:08:05.334 } 00:08:05.334 ], 00:08:05.334 "driver_specific": {} 00:08:05.334 } 00:08:05.334 ] 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.334 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.593 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.593 "name": "Existed_Raid", 00:08:05.593 "uuid": "5efdc5bd-5003-45b6-ae5e-e3830123ff9c", 00:08:05.593 "strip_size_kb": 0, 00:08:05.593 "state": "online", 00:08:05.593 "raid_level": "raid1", 00:08:05.593 "superblock": false, 00:08:05.593 "num_base_bdevs": 2, 00:08:05.593 "num_base_bdevs_discovered": 2, 00:08:05.593 "num_base_bdevs_operational": 2, 00:08:05.593 "base_bdevs_list": [ 00:08:05.593 { 00:08:05.593 "name": "BaseBdev1", 00:08:05.593 "uuid": "0e94dba8-f138-444e-b801-34f1172ba90b", 00:08:05.593 "is_configured": true, 00:08:05.593 "data_offset": 0, 00:08:05.593 "data_size": 65536 00:08:05.593 }, 00:08:05.593 { 00:08:05.593 "name": "BaseBdev2", 00:08:05.593 "uuid": "f9a38427-aa42-40b7-928c-6545aad6190a", 00:08:05.593 "is_configured": true, 00:08:05.593 "data_offset": 0, 00:08:05.593 "data_size": 65536 00:08:05.593 } 00:08:05.593 ] 00:08:05.593 }' 00:08:05.593 03:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:05.593 03:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.851 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:05.851 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:05.851 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:05.851 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:05.851 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:05.851 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:05.851 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:05.851 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:05.851 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.851 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.851 [2024-11-18 03:08:09.324270] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.851 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.851 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:05.851 "name": "Existed_Raid", 00:08:05.851 "aliases": [ 00:08:05.851 "5efdc5bd-5003-45b6-ae5e-e3830123ff9c" 00:08:05.851 ], 00:08:05.851 "product_name": "Raid Volume", 00:08:05.851 "block_size": 512, 00:08:05.851 "num_blocks": 65536, 00:08:05.851 "uuid": "5efdc5bd-5003-45b6-ae5e-e3830123ff9c", 00:08:05.851 "assigned_rate_limits": { 00:08:05.851 "rw_ios_per_sec": 0, 00:08:05.851 "rw_mbytes_per_sec": 0, 00:08:05.851 "r_mbytes_per_sec": 
0, 00:08:05.851 "w_mbytes_per_sec": 0 00:08:05.851 }, 00:08:05.851 "claimed": false, 00:08:05.851 "zoned": false, 00:08:05.851 "supported_io_types": { 00:08:05.851 "read": true, 00:08:05.851 "write": true, 00:08:05.851 "unmap": false, 00:08:05.851 "flush": false, 00:08:05.851 "reset": true, 00:08:05.851 "nvme_admin": false, 00:08:05.851 "nvme_io": false, 00:08:05.851 "nvme_io_md": false, 00:08:05.851 "write_zeroes": true, 00:08:05.851 "zcopy": false, 00:08:05.851 "get_zone_info": false, 00:08:05.851 "zone_management": false, 00:08:05.851 "zone_append": false, 00:08:05.851 "compare": false, 00:08:05.851 "compare_and_write": false, 00:08:05.851 "abort": false, 00:08:05.851 "seek_hole": false, 00:08:05.851 "seek_data": false, 00:08:05.851 "copy": false, 00:08:05.851 "nvme_iov_md": false 00:08:05.851 }, 00:08:05.851 "memory_domains": [ 00:08:05.851 { 00:08:05.851 "dma_device_id": "system", 00:08:05.851 "dma_device_type": 1 00:08:05.851 }, 00:08:05.851 { 00:08:05.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.851 "dma_device_type": 2 00:08:05.851 }, 00:08:05.851 { 00:08:05.851 "dma_device_id": "system", 00:08:05.851 "dma_device_type": 1 00:08:05.851 }, 00:08:05.851 { 00:08:05.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.851 "dma_device_type": 2 00:08:05.851 } 00:08:05.851 ], 00:08:05.851 "driver_specific": { 00:08:05.851 "raid": { 00:08:05.851 "uuid": "5efdc5bd-5003-45b6-ae5e-e3830123ff9c", 00:08:05.851 "strip_size_kb": 0, 00:08:05.851 "state": "online", 00:08:05.851 "raid_level": "raid1", 00:08:05.851 "superblock": false, 00:08:05.851 "num_base_bdevs": 2, 00:08:05.851 "num_base_bdevs_discovered": 2, 00:08:05.851 "num_base_bdevs_operational": 2, 00:08:05.851 "base_bdevs_list": [ 00:08:05.851 { 00:08:05.851 "name": "BaseBdev1", 00:08:05.851 "uuid": "0e94dba8-f138-444e-b801-34f1172ba90b", 00:08:05.851 "is_configured": true, 00:08:05.851 "data_offset": 0, 00:08:05.851 "data_size": 65536 00:08:05.851 }, 00:08:05.851 { 00:08:05.851 "name": "BaseBdev2", 
00:08:05.851 "uuid": "f9a38427-aa42-40b7-928c-6545aad6190a", 00:08:05.851 "is_configured": true, 00:08:05.851 "data_offset": 0, 00:08:05.851 "data_size": 65536 00:08:05.851 } 00:08:05.851 ] 00:08:05.851 } 00:08:05.851 } 00:08:05.851 }' 00:08:05.851 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:05.851 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:05.851 BaseBdev2' 00:08:05.851 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.110 [2024-11-18 03:08:09.547581] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.110 "name": "Existed_Raid", 00:08:06.110 "uuid": "5efdc5bd-5003-45b6-ae5e-e3830123ff9c", 00:08:06.110 "strip_size_kb": 0, 00:08:06.110 "state": "online", 00:08:06.110 "raid_level": "raid1", 00:08:06.110 "superblock": false, 00:08:06.110 "num_base_bdevs": 2, 00:08:06.110 "num_base_bdevs_discovered": 1, 00:08:06.110 "num_base_bdevs_operational": 1, 00:08:06.110 "base_bdevs_list": [ 00:08:06.110 
{ 00:08:06.110 "name": null, 00:08:06.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.110 "is_configured": false, 00:08:06.110 "data_offset": 0, 00:08:06.110 "data_size": 65536 00:08:06.110 }, 00:08:06.110 { 00:08:06.110 "name": "BaseBdev2", 00:08:06.110 "uuid": "f9a38427-aa42-40b7-928c-6545aad6190a", 00:08:06.110 "is_configured": true, 00:08:06.110 "data_offset": 0, 00:08:06.110 "data_size": 65536 00:08:06.110 } 00:08:06.110 ] 00:08:06.110 }' 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.110 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.678 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:06.678 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.678 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.678 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.678 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.678 03:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:06.678 03:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:06.678 [2024-11-18 03:08:10.018326] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:06.678 [2024-11-18 03:08:10.018480] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.678 [2024-11-18 03:08:10.030263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.678 [2024-11-18 03:08:10.030392] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:06.678 [2024-11-18 03:08:10.030427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74123 00:08:06.678 03:08:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 74123 ']' 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 74123 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74123 00:08:06.678 killing process with pid 74123 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74123' 00:08:06.678 03:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 74123 00:08:06.678 [2024-11-18 03:08:10.116807] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:06.679 03:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 74123 00:08:06.679 [2024-11-18 03:08:10.117870] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.937 ************************************ 00:08:06.937 END TEST raid_state_function_test 00:08:06.937 ************************************ 00:08:06.937 03:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:06.937 00:08:06.937 real 0m3.874s 00:08:06.937 user 0m6.076s 00:08:06.937 sys 0m0.785s 00:08:06.937 03:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.937 03:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.937 03:08:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:06.937 03:08:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:06.937 03:08:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.937 03:08:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.937 ************************************ 00:08:06.937 START TEST raid_state_function_test_sb 00:08:06.937 ************************************ 00:08:06.937 03:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:08:06.937 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:06.937 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:06.937 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:06.937 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:06.937 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:06.937 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:06.938 Process raid pid: 74360 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74360 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74360' 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74360 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74360 ']' 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:06.938 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:06.938 03:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.196 [2024-11-18 03:08:10.528649] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:07.196 [2024-11-18 03:08:10.528791] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.196 [2024-11-18 03:08:10.672547] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.196 [2024-11-18 03:08:10.722442] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.196 [2024-11-18 03:08:10.765253] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.196 [2024-11-18 03:08:10.765289] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.132 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.132 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:08.132 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.132 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.132 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.133 [2024-11-18 03:08:11.382951] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.133 [2024-11-18 03:08:11.383097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.133 [2024-11-18 03:08:11.383114] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.133 [2024-11-18 03:08:11.383125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.133 "name": "Existed_Raid", 00:08:08.133 "uuid": "edfe4cfd-3915-4fa9-9c5e-e5dd4272f7e8", 00:08:08.133 "strip_size_kb": 0, 00:08:08.133 "state": "configuring", 00:08:08.133 "raid_level": "raid1", 00:08:08.133 "superblock": true, 00:08:08.133 "num_base_bdevs": 2, 00:08:08.133 "num_base_bdevs_discovered": 0, 00:08:08.133 "num_base_bdevs_operational": 2, 00:08:08.133 "base_bdevs_list": [ 00:08:08.133 { 00:08:08.133 "name": "BaseBdev1", 00:08:08.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.133 "is_configured": false, 00:08:08.133 "data_offset": 0, 00:08:08.133 "data_size": 0 00:08:08.133 }, 00:08:08.133 { 00:08:08.133 "name": "BaseBdev2", 00:08:08.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.133 "is_configured": false, 00:08:08.133 "data_offset": 0, 00:08:08.133 "data_size": 0 00:08:08.133 } 00:08:08.133 ] 00:08:08.133 }' 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.133 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.392 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.392 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.392 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.392 [2024-11-18 03:08:11.818125] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:08.392 [2024-11-18 03:08:11.818240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:08.392 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.392 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.392 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.392 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.392 [2024-11-18 03:08:11.830144] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.392 [2024-11-18 03:08:11.830238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.392 [2024-11-18 03:08:11.830281] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.392 [2024-11-18 03:08:11.830306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.392 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.392 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:08.392 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.392 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.392 [2024-11-18 03:08:11.851016] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.392 BaseBdev1 00:08:08.392 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.392 03:08:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.393 [ 00:08:08.393 { 00:08:08.393 "name": "BaseBdev1", 00:08:08.393 "aliases": [ 00:08:08.393 "eb2756fc-cece-4542-97b6-49da5b9f19ec" 00:08:08.393 ], 00:08:08.393 "product_name": "Malloc disk", 00:08:08.393 "block_size": 512, 00:08:08.393 "num_blocks": 65536, 00:08:08.393 "uuid": "eb2756fc-cece-4542-97b6-49da5b9f19ec", 00:08:08.393 "assigned_rate_limits": { 00:08:08.393 "rw_ios_per_sec": 0, 00:08:08.393 "rw_mbytes_per_sec": 0, 00:08:08.393 "r_mbytes_per_sec": 0, 00:08:08.393 "w_mbytes_per_sec": 0 00:08:08.393 }, 00:08:08.393 "claimed": true, 
00:08:08.393 "claim_type": "exclusive_write", 00:08:08.393 "zoned": false, 00:08:08.393 "supported_io_types": { 00:08:08.393 "read": true, 00:08:08.393 "write": true, 00:08:08.393 "unmap": true, 00:08:08.393 "flush": true, 00:08:08.393 "reset": true, 00:08:08.393 "nvme_admin": false, 00:08:08.393 "nvme_io": false, 00:08:08.393 "nvme_io_md": false, 00:08:08.393 "write_zeroes": true, 00:08:08.393 "zcopy": true, 00:08:08.393 "get_zone_info": false, 00:08:08.393 "zone_management": false, 00:08:08.393 "zone_append": false, 00:08:08.393 "compare": false, 00:08:08.393 "compare_and_write": false, 00:08:08.393 "abort": true, 00:08:08.393 "seek_hole": false, 00:08:08.393 "seek_data": false, 00:08:08.393 "copy": true, 00:08:08.393 "nvme_iov_md": false 00:08:08.393 }, 00:08:08.393 "memory_domains": [ 00:08:08.393 { 00:08:08.393 "dma_device_id": "system", 00:08:08.393 "dma_device_type": 1 00:08:08.393 }, 00:08:08.393 { 00:08:08.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.393 "dma_device_type": 2 00:08:08.393 } 00:08:08.393 ], 00:08:08.393 "driver_specific": {} 00:08:08.393 } 00:08:08.393 ] 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.393 "name": "Existed_Raid", 00:08:08.393 "uuid": "58971a7d-6bb4-4120-8a0c-a5555be5f9cc", 00:08:08.393 "strip_size_kb": 0, 00:08:08.393 "state": "configuring", 00:08:08.393 "raid_level": "raid1", 00:08:08.393 "superblock": true, 00:08:08.393 "num_base_bdevs": 2, 00:08:08.393 "num_base_bdevs_discovered": 1, 00:08:08.393 "num_base_bdevs_operational": 2, 00:08:08.393 "base_bdevs_list": [ 00:08:08.393 { 00:08:08.393 "name": "BaseBdev1", 00:08:08.393 "uuid": "eb2756fc-cece-4542-97b6-49da5b9f19ec", 00:08:08.393 "is_configured": true, 00:08:08.393 "data_offset": 2048, 00:08:08.393 "data_size": 63488 00:08:08.393 }, 00:08:08.393 { 00:08:08.393 "name": "BaseBdev2", 00:08:08.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.393 "is_configured": false, 00:08:08.393 
"data_offset": 0, 00:08:08.393 "data_size": 0 00:08:08.393 } 00:08:08.393 ] 00:08:08.393 }' 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.393 03:08:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.961 [2024-11-18 03:08:12.334271] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.961 [2024-11-18 03:08:12.334336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.961 [2024-11-18 03:08:12.342271] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.961 [2024-11-18 03:08:12.344157] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.961 [2024-11-18 03:08:12.344219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.961 "name": "Existed_Raid", 00:08:08.961 "uuid": "499de61e-5e40-4596-892c-4a4f0b63c7d7", 00:08:08.961 "strip_size_kb": 0, 00:08:08.961 "state": "configuring", 00:08:08.961 "raid_level": "raid1", 00:08:08.961 "superblock": true, 00:08:08.961 "num_base_bdevs": 2, 00:08:08.961 "num_base_bdevs_discovered": 1, 00:08:08.961 "num_base_bdevs_operational": 2, 00:08:08.961 "base_bdevs_list": [ 00:08:08.961 { 00:08:08.961 "name": "BaseBdev1", 00:08:08.961 "uuid": "eb2756fc-cece-4542-97b6-49da5b9f19ec", 00:08:08.961 "is_configured": true, 00:08:08.961 "data_offset": 2048, 00:08:08.961 "data_size": 63488 00:08:08.961 }, 00:08:08.961 { 00:08:08.961 "name": "BaseBdev2", 00:08:08.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.961 "is_configured": false, 00:08:08.961 "data_offset": 0, 00:08:08.961 "data_size": 0 00:08:08.961 } 00:08:08.961 ] 00:08:08.961 }' 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.961 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.526 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:09.526 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.526 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.526 [2024-11-18 03:08:12.817565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.526 [2024-11-18 03:08:12.817912] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:09.526 [2024-11-18 03:08:12.817999] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:09.526 [2024-11-18 03:08:12.818372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:09.526 
BaseBdev2 00:08:09.526 [2024-11-18 03:08:12.818579] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:09.526 [2024-11-18 03:08:12.818606] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:09.526 [2024-11-18 03:08:12.818763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.526 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.526 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:09.526 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:09.526 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:09.526 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:09.526 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:09.526 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:09.526 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:09.526 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.526 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.526 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.526 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.526 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.526 03:08:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:09.526 [ 00:08:09.526 { 00:08:09.526 "name": "BaseBdev2", 00:08:09.526 "aliases": [ 00:08:09.526 "8eb6a76b-7eb3-4616-ade7-dd40e0c8a867" 00:08:09.526 ], 00:08:09.526 "product_name": "Malloc disk", 00:08:09.526 "block_size": 512, 00:08:09.526 "num_blocks": 65536, 00:08:09.526 "uuid": "8eb6a76b-7eb3-4616-ade7-dd40e0c8a867", 00:08:09.526 "assigned_rate_limits": { 00:08:09.526 "rw_ios_per_sec": 0, 00:08:09.526 "rw_mbytes_per_sec": 0, 00:08:09.526 "r_mbytes_per_sec": 0, 00:08:09.526 "w_mbytes_per_sec": 0 00:08:09.526 }, 00:08:09.526 "claimed": true, 00:08:09.526 "claim_type": "exclusive_write", 00:08:09.526 "zoned": false, 00:08:09.526 "supported_io_types": { 00:08:09.526 "read": true, 00:08:09.526 "write": true, 00:08:09.526 "unmap": true, 00:08:09.526 "flush": true, 00:08:09.526 "reset": true, 00:08:09.526 "nvme_admin": false, 00:08:09.526 "nvme_io": false, 00:08:09.526 "nvme_io_md": false, 00:08:09.526 "write_zeroes": true, 00:08:09.526 "zcopy": true, 00:08:09.526 "get_zone_info": false, 00:08:09.526 "zone_management": false, 00:08:09.526 "zone_append": false, 00:08:09.526 "compare": false, 00:08:09.526 "compare_and_write": false, 00:08:09.526 "abort": true, 00:08:09.526 "seek_hole": false, 00:08:09.526 "seek_data": false, 00:08:09.526 "copy": true, 00:08:09.526 "nvme_iov_md": false 00:08:09.526 }, 00:08:09.526 "memory_domains": [ 00:08:09.526 { 00:08:09.527 "dma_device_id": "system", 00:08:09.527 "dma_device_type": 1 00:08:09.527 }, 00:08:09.527 { 00:08:09.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.527 "dma_device_type": 2 00:08:09.527 } 00:08:09.527 ], 00:08:09.527 "driver_specific": {} 00:08:09.527 } 00:08:09.527 ] 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:09.527 "name": "Existed_Raid", 00:08:09.527 "uuid": "499de61e-5e40-4596-892c-4a4f0b63c7d7", 00:08:09.527 "strip_size_kb": 0, 00:08:09.527 "state": "online", 00:08:09.527 "raid_level": "raid1", 00:08:09.527 "superblock": true, 00:08:09.527 "num_base_bdevs": 2, 00:08:09.527 "num_base_bdevs_discovered": 2, 00:08:09.527 "num_base_bdevs_operational": 2, 00:08:09.527 "base_bdevs_list": [ 00:08:09.527 { 00:08:09.527 "name": "BaseBdev1", 00:08:09.527 "uuid": "eb2756fc-cece-4542-97b6-49da5b9f19ec", 00:08:09.527 "is_configured": true, 00:08:09.527 "data_offset": 2048, 00:08:09.527 "data_size": 63488 00:08:09.527 }, 00:08:09.527 { 00:08:09.527 "name": "BaseBdev2", 00:08:09.527 "uuid": "8eb6a76b-7eb3-4616-ade7-dd40e0c8a867", 00:08:09.527 "is_configured": true, 00:08:09.527 "data_offset": 2048, 00:08:09.527 "data_size": 63488 00:08:09.527 } 00:08:09.527 ] 00:08:09.527 }' 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.527 03:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.795 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:09.795 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:09.795 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.795 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.795 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.795 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.795 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:09.795 03:08:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.795 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.795 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.795 [2024-11-18 03:08:13.281137] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.795 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.795 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.795 "name": "Existed_Raid", 00:08:09.795 "aliases": [ 00:08:09.795 "499de61e-5e40-4596-892c-4a4f0b63c7d7" 00:08:09.795 ], 00:08:09.795 "product_name": "Raid Volume", 00:08:09.795 "block_size": 512, 00:08:09.795 "num_blocks": 63488, 00:08:09.795 "uuid": "499de61e-5e40-4596-892c-4a4f0b63c7d7", 00:08:09.795 "assigned_rate_limits": { 00:08:09.795 "rw_ios_per_sec": 0, 00:08:09.795 "rw_mbytes_per_sec": 0, 00:08:09.795 "r_mbytes_per_sec": 0, 00:08:09.795 "w_mbytes_per_sec": 0 00:08:09.795 }, 00:08:09.795 "claimed": false, 00:08:09.795 "zoned": false, 00:08:09.795 "supported_io_types": { 00:08:09.795 "read": true, 00:08:09.795 "write": true, 00:08:09.795 "unmap": false, 00:08:09.795 "flush": false, 00:08:09.795 "reset": true, 00:08:09.795 "nvme_admin": false, 00:08:09.795 "nvme_io": false, 00:08:09.795 "nvme_io_md": false, 00:08:09.795 "write_zeroes": true, 00:08:09.795 "zcopy": false, 00:08:09.795 "get_zone_info": false, 00:08:09.795 "zone_management": false, 00:08:09.795 "zone_append": false, 00:08:09.795 "compare": false, 00:08:09.795 "compare_and_write": false, 00:08:09.795 "abort": false, 00:08:09.796 "seek_hole": false, 00:08:09.796 "seek_data": false, 00:08:09.796 "copy": false, 00:08:09.796 "nvme_iov_md": false 00:08:09.796 }, 00:08:09.796 "memory_domains": [ 00:08:09.796 { 00:08:09.796 "dma_device_id": "system", 00:08:09.796 
"dma_device_type": 1 00:08:09.796 }, 00:08:09.796 { 00:08:09.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.796 "dma_device_type": 2 00:08:09.796 }, 00:08:09.796 { 00:08:09.796 "dma_device_id": "system", 00:08:09.796 "dma_device_type": 1 00:08:09.796 }, 00:08:09.796 { 00:08:09.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.796 "dma_device_type": 2 00:08:09.796 } 00:08:09.796 ], 00:08:09.796 "driver_specific": { 00:08:09.796 "raid": { 00:08:09.796 "uuid": "499de61e-5e40-4596-892c-4a4f0b63c7d7", 00:08:09.796 "strip_size_kb": 0, 00:08:09.796 "state": "online", 00:08:09.796 "raid_level": "raid1", 00:08:09.796 "superblock": true, 00:08:09.796 "num_base_bdevs": 2, 00:08:09.796 "num_base_bdevs_discovered": 2, 00:08:09.796 "num_base_bdevs_operational": 2, 00:08:09.796 "base_bdevs_list": [ 00:08:09.796 { 00:08:09.796 "name": "BaseBdev1", 00:08:09.796 "uuid": "eb2756fc-cece-4542-97b6-49da5b9f19ec", 00:08:09.796 "is_configured": true, 00:08:09.796 "data_offset": 2048, 00:08:09.796 "data_size": 63488 00:08:09.796 }, 00:08:09.796 { 00:08:09.796 "name": "BaseBdev2", 00:08:09.796 "uuid": "8eb6a76b-7eb3-4616-ade7-dd40e0c8a867", 00:08:09.796 "is_configured": true, 00:08:09.796 "data_offset": 2048, 00:08:09.796 "data_size": 63488 00:08:09.796 } 00:08:09.796 ] 00:08:09.796 } 00:08:09.796 } 00:08:09.796 }' 00:08:09.796 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:10.072 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:10.072 BaseBdev2' 00:08:10.072 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.072 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:10.072 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:08:10.072 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.072 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:10.072 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.072 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.072 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.072 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.072 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.072 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.072 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.072 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:10.072 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.072 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.072 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:10.073 03:08:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.073 [2024-11-18 03:08:13.512497] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.073 "name": "Existed_Raid", 00:08:10.073 "uuid": "499de61e-5e40-4596-892c-4a4f0b63c7d7", 00:08:10.073 "strip_size_kb": 0, 00:08:10.073 "state": "online", 00:08:10.073 "raid_level": "raid1", 00:08:10.073 "superblock": true, 00:08:10.073 "num_base_bdevs": 2, 00:08:10.073 "num_base_bdevs_discovered": 1, 00:08:10.073 "num_base_bdevs_operational": 1, 00:08:10.073 "base_bdevs_list": [ 00:08:10.073 { 00:08:10.073 "name": null, 00:08:10.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.073 "is_configured": false, 00:08:10.073 "data_offset": 0, 00:08:10.073 "data_size": 63488 00:08:10.073 }, 00:08:10.073 { 00:08:10.073 "name": "BaseBdev2", 00:08:10.073 "uuid": "8eb6a76b-7eb3-4616-ade7-dd40e0c8a867", 00:08:10.073 "is_configured": true, 00:08:10.073 "data_offset": 2048, 00:08:10.073 "data_size": 63488 00:08:10.073 } 00:08:10.073 ] 00:08:10.073 }' 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.073 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.640 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:08:10.640 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.640 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.640 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.640 03:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:10.640 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.640 03:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.640 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:10.640 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:10.640 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:10.640 03:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.640 03:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.640 [2024-11-18 03:08:14.023387] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:10.640 [2024-11-18 03:08:14.023512] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:10.640 [2024-11-18 03:08:14.035646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.640 [2024-11-18 03:08:14.035708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:10.640 [2024-11-18 03:08:14.035722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:10.640 03:08:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.640 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:10.640 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.640 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.641 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:10.641 03:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.641 03:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.641 03:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.641 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:10.641 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:10.641 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:10.641 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74360 00:08:10.641 03:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74360 ']' 00:08:10.641 03:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74360 00:08:10.641 03:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:10.641 03:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:10.641 03:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74360 00:08:10.641 killing process with pid 74360 00:08:10.641 03:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:08:10.641 03:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:10.641 03:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74360' 00:08:10.641 03:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74360 00:08:10.641 [2024-11-18 03:08:14.119718] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:10.641 03:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74360 00:08:10.641 [2024-11-18 03:08:14.120801] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.898 03:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:10.898 00:08:10.898 real 0m3.930s 00:08:10.898 user 0m6.209s 00:08:10.898 sys 0m0.777s 00:08:10.898 03:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.898 03:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.898 ************************************ 00:08:10.898 END TEST raid_state_function_test_sb 00:08:10.898 ************************************ 00:08:10.898 03:08:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:10.898 03:08:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:10.898 03:08:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.898 03:08:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.898 ************************************ 00:08:10.898 START TEST raid_superblock_test 00:08:10.898 ************************************ 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74601 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74601 00:08:10.898 03:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74601 ']' 00:08:10.899 03:08:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.899 03:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.899 03:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.899 03:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.899 03:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.156 [2024-11-18 03:08:14.522153] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:11.156 [2024-11-18 03:08:14.522293] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74601 ] 00:08:11.156 [2024-11-18 03:08:14.663309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.156 [2024-11-18 03:08:14.713546] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.413 [2024-11-18 03:08:14.756473] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.413 [2024-11-18 03:08:14.756517] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:11.979 03:08:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.979 malloc1 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.979 [2024-11-18 03:08:15.379841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:11.979 [2024-11-18 03:08:15.379940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.979 [2024-11-18 03:08:15.379969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:11.979 [2024-11-18 03:08:15.380017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.979 
[2024-11-18 03:08:15.382324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.979 [2024-11-18 03:08:15.382367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:11.979 pt1 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.979 malloc2 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.979 03:08:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.979 [2024-11-18 03:08:15.420219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:11.979 [2024-11-18 03:08:15.420301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.979 [2024-11-18 03:08:15.420323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:11.979 [2024-11-18 03:08:15.420335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.979 [2024-11-18 03:08:15.422736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.979 [2024-11-18 03:08:15.422782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:11.979 pt2 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.979 [2024-11-18 03:08:15.432244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:11.979 [2024-11-18 03:08:15.434349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:11.979 [2024-11-18 03:08:15.434505] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:11.979 [2024-11-18 03:08:15.434521] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:11.979 [2024-11-18 
03:08:15.434845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:11.979 [2024-11-18 03:08:15.435024] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:11.979 [2024-11-18 03:08:15.435044] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:11.979 [2024-11-18 03:08:15.435217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.979 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.979 "name": "raid_bdev1", 00:08:11.979 "uuid": "296b25c2-af73-4667-94b9-b0b233c23685", 00:08:11.979 "strip_size_kb": 0, 00:08:11.979 "state": "online", 00:08:11.979 "raid_level": "raid1", 00:08:11.979 "superblock": true, 00:08:11.979 "num_base_bdevs": 2, 00:08:11.979 "num_base_bdevs_discovered": 2, 00:08:11.979 "num_base_bdevs_operational": 2, 00:08:11.979 "base_bdevs_list": [ 00:08:11.979 { 00:08:11.979 "name": "pt1", 00:08:11.979 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:11.979 "is_configured": true, 00:08:11.979 "data_offset": 2048, 00:08:11.979 "data_size": 63488 00:08:11.979 }, 00:08:11.979 { 00:08:11.979 "name": "pt2", 00:08:11.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:11.979 "is_configured": true, 00:08:11.979 "data_offset": 2048, 00:08:11.979 "data_size": 63488 00:08:11.979 } 00:08:11.980 ] 00:08:11.980 }' 00:08:11.980 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.980 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:12.547 03:08:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.547 [2024-11-18 03:08:15.867802] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:12.547 "name": "raid_bdev1", 00:08:12.547 "aliases": [ 00:08:12.547 "296b25c2-af73-4667-94b9-b0b233c23685" 00:08:12.547 ], 00:08:12.547 "product_name": "Raid Volume", 00:08:12.547 "block_size": 512, 00:08:12.547 "num_blocks": 63488, 00:08:12.547 "uuid": "296b25c2-af73-4667-94b9-b0b233c23685", 00:08:12.547 "assigned_rate_limits": { 00:08:12.547 "rw_ios_per_sec": 0, 00:08:12.547 "rw_mbytes_per_sec": 0, 00:08:12.547 "r_mbytes_per_sec": 0, 00:08:12.547 "w_mbytes_per_sec": 0 00:08:12.547 }, 00:08:12.547 "claimed": false, 00:08:12.547 "zoned": false, 00:08:12.547 "supported_io_types": { 00:08:12.547 "read": true, 00:08:12.547 "write": true, 00:08:12.547 "unmap": false, 00:08:12.547 "flush": false, 00:08:12.547 "reset": true, 00:08:12.547 "nvme_admin": false, 00:08:12.547 "nvme_io": false, 00:08:12.547 "nvme_io_md": false, 00:08:12.547 "write_zeroes": true, 00:08:12.547 "zcopy": false, 00:08:12.547 "get_zone_info": false, 00:08:12.547 "zone_management": false, 00:08:12.547 "zone_append": false, 00:08:12.547 "compare": false, 00:08:12.547 "compare_and_write": false, 00:08:12.547 "abort": false, 00:08:12.547 "seek_hole": false, 00:08:12.547 
"seek_data": false, 00:08:12.547 "copy": false, 00:08:12.547 "nvme_iov_md": false 00:08:12.547 }, 00:08:12.547 "memory_domains": [ 00:08:12.547 { 00:08:12.547 "dma_device_id": "system", 00:08:12.547 "dma_device_type": 1 00:08:12.547 }, 00:08:12.547 { 00:08:12.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.547 "dma_device_type": 2 00:08:12.547 }, 00:08:12.547 { 00:08:12.547 "dma_device_id": "system", 00:08:12.547 "dma_device_type": 1 00:08:12.547 }, 00:08:12.547 { 00:08:12.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.547 "dma_device_type": 2 00:08:12.547 } 00:08:12.547 ], 00:08:12.547 "driver_specific": { 00:08:12.547 "raid": { 00:08:12.547 "uuid": "296b25c2-af73-4667-94b9-b0b233c23685", 00:08:12.547 "strip_size_kb": 0, 00:08:12.547 "state": "online", 00:08:12.547 "raid_level": "raid1", 00:08:12.547 "superblock": true, 00:08:12.547 "num_base_bdevs": 2, 00:08:12.547 "num_base_bdevs_discovered": 2, 00:08:12.547 "num_base_bdevs_operational": 2, 00:08:12.547 "base_bdevs_list": [ 00:08:12.547 { 00:08:12.547 "name": "pt1", 00:08:12.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:12.547 "is_configured": true, 00:08:12.547 "data_offset": 2048, 00:08:12.547 "data_size": 63488 00:08:12.547 }, 00:08:12.547 { 00:08:12.547 "name": "pt2", 00:08:12.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:12.547 "is_configured": true, 00:08:12.547 "data_offset": 2048, 00:08:12.547 "data_size": 63488 00:08:12.547 } 00:08:12.547 ] 00:08:12.547 } 00:08:12.547 } 00:08:12.547 }' 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:12.547 pt2' 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.547 03:08:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.547 03:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.547 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.547 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.547 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.547 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.547 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:12.547 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.547 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.547 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.547 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.547 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.547 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.547 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:12.548 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:12.548 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.548 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.548 [2024-11-18 03:08:16.107405] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=296b25c2-af73-4667-94b9-b0b233c23685 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 296b25c2-af73-4667-94b9-b0b233c23685 ']' 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.808 [2024-11-18 03:08:16.151016] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:12.808 [2024-11-18 03:08:16.151051] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.808 [2024-11-18 03:08:16.151165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.808 [2024-11-18 03:08:16.151264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:12.808 [2024-11-18 03:08:16.151275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.808 [2024-11-18 03:08:16.266866] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:12.808 [2024-11-18 03:08:16.269044] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:12.808 [2024-11-18 03:08:16.269135] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:08:12.808 [2024-11-18 03:08:16.269185] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:12.808 [2024-11-18 03:08:16.269203] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:12.808 [2024-11-18 03:08:16.269220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:12.808 request: 00:08:12.808 { 00:08:12.808 "name": "raid_bdev1", 00:08:12.808 "raid_level": "raid1", 00:08:12.808 "base_bdevs": [ 00:08:12.808 "malloc1", 00:08:12.808 "malloc2" 00:08:12.808 ], 00:08:12.808 "superblock": false, 00:08:12.808 "method": "bdev_raid_create", 00:08:12.808 "req_id": 1 00:08:12.808 } 00:08:12.808 Got JSON-RPC error response 00:08:12.808 response: 00:08:12.808 { 00:08:12.808 "code": -17, 00:08:12.808 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:12.808 } 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.808 [2024-11-18 03:08:16.314753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:12.808 [2024-11-18 03:08:16.314844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.808 [2024-11-18 03:08:16.314867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:12.808 [2024-11-18 03:08:16.314877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.808 [2024-11-18 03:08:16.317239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.808 [2024-11-18 03:08:16.317282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:12.808 [2024-11-18 03:08:16.317369] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:12.808 [2024-11-18 03:08:16.317411] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:12.808 pt1 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.808 03:08:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.808 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.809 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.809 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.809 "name": "raid_bdev1", 00:08:12.809 "uuid": "296b25c2-af73-4667-94b9-b0b233c23685", 00:08:12.809 "strip_size_kb": 0, 00:08:12.809 "state": "configuring", 00:08:12.809 "raid_level": "raid1", 00:08:12.809 "superblock": true, 00:08:12.809 "num_base_bdevs": 2, 00:08:12.809 "num_base_bdevs_discovered": 1, 00:08:12.809 "num_base_bdevs_operational": 2, 00:08:12.809 "base_bdevs_list": [ 00:08:12.809 { 00:08:12.809 "name": "pt1", 00:08:12.809 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:12.809 
"is_configured": true, 00:08:12.809 "data_offset": 2048, 00:08:12.809 "data_size": 63488 00:08:12.809 }, 00:08:12.809 { 00:08:12.809 "name": null, 00:08:12.809 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:12.809 "is_configured": false, 00:08:12.809 "data_offset": 2048, 00:08:12.809 "data_size": 63488 00:08:12.809 } 00:08:12.809 ] 00:08:12.809 }' 00:08:12.809 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.809 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.377 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:13.377 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:13.377 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:13.377 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:13.377 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.377 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.378 [2024-11-18 03:08:16.766084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:13.378 [2024-11-18 03:08:16.766164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.378 [2024-11-18 03:08:16.766191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:13.378 [2024-11-18 03:08:16.766202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.378 [2024-11-18 03:08:16.766654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.378 [2024-11-18 03:08:16.766682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:13.378 [2024-11-18 03:08:16.766766] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:13.378 [2024-11-18 03:08:16.766787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:13.378 [2024-11-18 03:08:16.766882] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:13.378 [2024-11-18 03:08:16.766899] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:13.378 [2024-11-18 03:08:16.767172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:13.378 [2024-11-18 03:08:16.767309] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:13.378 [2024-11-18 03:08:16.767336] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:13.378 [2024-11-18 03:08:16.767450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.378 pt2 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.378 
03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.378 "name": "raid_bdev1", 00:08:13.378 "uuid": "296b25c2-af73-4667-94b9-b0b233c23685", 00:08:13.378 "strip_size_kb": 0, 00:08:13.378 "state": "online", 00:08:13.378 "raid_level": "raid1", 00:08:13.378 "superblock": true, 00:08:13.378 "num_base_bdevs": 2, 00:08:13.378 "num_base_bdevs_discovered": 2, 00:08:13.378 "num_base_bdevs_operational": 2, 00:08:13.378 "base_bdevs_list": [ 00:08:13.378 { 00:08:13.378 "name": "pt1", 00:08:13.378 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.378 "is_configured": true, 00:08:13.378 "data_offset": 2048, 00:08:13.378 "data_size": 63488 00:08:13.378 }, 00:08:13.378 { 00:08:13.378 "name": "pt2", 00:08:13.378 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.378 "is_configured": true, 00:08:13.378 "data_offset": 2048, 00:08:13.378 "data_size": 63488 00:08:13.378 } 00:08:13.378 ] 00:08:13.378 }' 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:13.378 03:08:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.948 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:13.948 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:13.948 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:13.948 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:13.948 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.948 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:13.948 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.948 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.948 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.948 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.948 [2024-11-18 03:08:17.249537] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.948 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.948 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.948 "name": "raid_bdev1", 00:08:13.948 "aliases": [ 00:08:13.948 "296b25c2-af73-4667-94b9-b0b233c23685" 00:08:13.948 ], 00:08:13.948 "product_name": "Raid Volume", 00:08:13.948 "block_size": 512, 00:08:13.948 "num_blocks": 63488, 00:08:13.948 "uuid": "296b25c2-af73-4667-94b9-b0b233c23685", 00:08:13.948 "assigned_rate_limits": { 00:08:13.948 "rw_ios_per_sec": 0, 00:08:13.948 "rw_mbytes_per_sec": 0, 00:08:13.948 "r_mbytes_per_sec": 0, 00:08:13.948 "w_mbytes_per_sec": 0 
00:08:13.948 }, 00:08:13.948 "claimed": false, 00:08:13.948 "zoned": false, 00:08:13.948 "supported_io_types": { 00:08:13.948 "read": true, 00:08:13.948 "write": true, 00:08:13.948 "unmap": false, 00:08:13.948 "flush": false, 00:08:13.948 "reset": true, 00:08:13.948 "nvme_admin": false, 00:08:13.948 "nvme_io": false, 00:08:13.948 "nvme_io_md": false, 00:08:13.948 "write_zeroes": true, 00:08:13.948 "zcopy": false, 00:08:13.948 "get_zone_info": false, 00:08:13.948 "zone_management": false, 00:08:13.948 "zone_append": false, 00:08:13.948 "compare": false, 00:08:13.948 "compare_and_write": false, 00:08:13.948 "abort": false, 00:08:13.948 "seek_hole": false, 00:08:13.948 "seek_data": false, 00:08:13.948 "copy": false, 00:08:13.948 "nvme_iov_md": false 00:08:13.948 }, 00:08:13.948 "memory_domains": [ 00:08:13.948 { 00:08:13.948 "dma_device_id": "system", 00:08:13.948 "dma_device_type": 1 00:08:13.948 }, 00:08:13.948 { 00:08:13.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.948 "dma_device_type": 2 00:08:13.948 }, 00:08:13.948 { 00:08:13.948 "dma_device_id": "system", 00:08:13.948 "dma_device_type": 1 00:08:13.948 }, 00:08:13.948 { 00:08:13.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.948 "dma_device_type": 2 00:08:13.948 } 00:08:13.948 ], 00:08:13.948 "driver_specific": { 00:08:13.948 "raid": { 00:08:13.948 "uuid": "296b25c2-af73-4667-94b9-b0b233c23685", 00:08:13.948 "strip_size_kb": 0, 00:08:13.948 "state": "online", 00:08:13.948 "raid_level": "raid1", 00:08:13.948 "superblock": true, 00:08:13.948 "num_base_bdevs": 2, 00:08:13.948 "num_base_bdevs_discovered": 2, 00:08:13.948 "num_base_bdevs_operational": 2, 00:08:13.948 "base_bdevs_list": [ 00:08:13.948 { 00:08:13.948 "name": "pt1", 00:08:13.948 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.948 "is_configured": true, 00:08:13.948 "data_offset": 2048, 00:08:13.948 "data_size": 63488 00:08:13.948 }, 00:08:13.948 { 00:08:13.948 "name": "pt2", 00:08:13.948 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:13.948 "is_configured": true, 00:08:13.948 "data_offset": 2048, 00:08:13.948 "data_size": 63488 00:08:13.948 } 00:08:13.948 ] 00:08:13.948 } 00:08:13.948 } 00:08:13.948 }' 00:08:13.948 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:13.949 pt2' 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.949 [2024-11-18 03:08:17.493102] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.949 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 296b25c2-af73-4667-94b9-b0b233c23685 '!=' 296b25c2-af73-4667-94b9-b0b233c23685 ']' 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:14.208 [2024-11-18 03:08:17.536776] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:14.208 "name": "raid_bdev1", 00:08:14.208 "uuid": "296b25c2-af73-4667-94b9-b0b233c23685", 00:08:14.208 "strip_size_kb": 0, 00:08:14.208 "state": "online", 00:08:14.208 "raid_level": "raid1", 00:08:14.208 "superblock": true, 00:08:14.208 "num_base_bdevs": 2, 00:08:14.208 "num_base_bdevs_discovered": 1, 00:08:14.208 "num_base_bdevs_operational": 1, 00:08:14.208 "base_bdevs_list": [ 00:08:14.208 { 00:08:14.208 "name": null, 00:08:14.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.208 "is_configured": false, 00:08:14.208 "data_offset": 0, 00:08:14.208 "data_size": 63488 00:08:14.208 }, 00:08:14.208 { 00:08:14.208 "name": "pt2", 00:08:14.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.208 "is_configured": true, 00:08:14.208 "data_offset": 2048, 00:08:14.208 "data_size": 63488 00:08:14.208 } 00:08:14.208 ] 00:08:14.208 }' 00:08:14.208 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.209 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 03:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:14.468 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 03:08:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 [2024-11-18 03:08:18.003929] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.468 [2024-11-18 03:08:18.003981] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.468 [2024-11-18 03:08:18.004096] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.468 [2024-11-18 03:08:18.004155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.468 [2024-11-18 03:08:18.004166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:14.468 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.468 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:14.468 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.727 [2024-11-18 03:08:18.075816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:14.727 [2024-11-18 03:08:18.075887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.727 [2024-11-18 03:08:18.075908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:14.727 [2024-11-18 03:08:18.075918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.727 [2024-11-18 03:08:18.078350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.727 [2024-11-18 03:08:18.078405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:14.727 [2024-11-18 03:08:18.078515] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:14.727 [2024-11-18 03:08:18.078546] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:14.727 [2024-11-18 03:08:18.078629] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:14.727 [2024-11-18 03:08:18.078638] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:14.727 [2024-11-18 03:08:18.078886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:14.727 [2024-11-18 03:08:18.079051] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:14.727 [2024-11-18 03:08:18.079072] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006d00 00:08:14.727 [2024-11-18 03:08:18.079200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.727 pt2 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.727 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:14.727 "name": "raid_bdev1", 00:08:14.727 "uuid": "296b25c2-af73-4667-94b9-b0b233c23685", 00:08:14.727 "strip_size_kb": 0, 00:08:14.727 "state": "online", 00:08:14.727 "raid_level": "raid1", 00:08:14.727 "superblock": true, 00:08:14.727 "num_base_bdevs": 2, 00:08:14.727 "num_base_bdevs_discovered": 1, 00:08:14.728 "num_base_bdevs_operational": 1, 00:08:14.728 "base_bdevs_list": [ 00:08:14.728 { 00:08:14.728 "name": null, 00:08:14.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.728 "is_configured": false, 00:08:14.728 "data_offset": 2048, 00:08:14.728 "data_size": 63488 00:08:14.728 }, 00:08:14.728 { 00:08:14.728 "name": "pt2", 00:08:14.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.728 "is_configured": true, 00:08:14.728 "data_offset": 2048, 00:08:14.728 "data_size": 63488 00:08:14.728 } 00:08:14.728 ] 00:08:14.728 }' 00:08:14.728 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.728 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.987 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:14.987 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.987 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.987 [2024-11-18 03:08:18.511105] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.987 [2024-11-18 03:08:18.511149] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.987 [2024-11-18 03:08:18.511235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.987 [2024-11-18 03:08:18.511283] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.987 [2024-11-18 03:08:18.511298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:14.987 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.987 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.987 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.987 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:14.987 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.987 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.247 [2024-11-18 03:08:18.570949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:15.247 [2024-11-18 03:08:18.571064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.247 [2024-11-18 03:08:18.571090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:15.247 [2024-11-18 03:08:18.571109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.247 [2024-11-18 03:08:18.573489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.247 [2024-11-18 03:08:18.573539] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:15.247 [2024-11-18 03:08:18.573625] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:15.247 [2024-11-18 03:08:18.573667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:15.247 [2024-11-18 03:08:18.573772] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:15.247 [2024-11-18 03:08:18.573804] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.247 [2024-11-18 03:08:18.573822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:08:15.247 [2024-11-18 03:08:18.573858] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:15.247 [2024-11-18 03:08:18.573937] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:15.247 [2024-11-18 03:08:18.573975] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:15.247 [2024-11-18 03:08:18.574233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:15.247 [2024-11-18 03:08:18.574364] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:15.247 [2024-11-18 03:08:18.574378] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:15.247 [2024-11-18 03:08:18.574504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.247 pt1 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.247 "name": "raid_bdev1", 00:08:15.247 "uuid": "296b25c2-af73-4667-94b9-b0b233c23685", 00:08:15.247 "strip_size_kb": 0, 00:08:15.247 "state": "online", 00:08:15.247 "raid_level": "raid1", 00:08:15.247 "superblock": true, 00:08:15.247 "num_base_bdevs": 2, 00:08:15.247 "num_base_bdevs_discovered": 1, 00:08:15.247 "num_base_bdevs_operational": 
1, 00:08:15.247 "base_bdevs_list": [ 00:08:15.247 { 00:08:15.247 "name": null, 00:08:15.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.247 "is_configured": false, 00:08:15.247 "data_offset": 2048, 00:08:15.247 "data_size": 63488 00:08:15.247 }, 00:08:15.247 { 00:08:15.247 "name": "pt2", 00:08:15.247 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.247 "is_configured": true, 00:08:15.247 "data_offset": 2048, 00:08:15.247 "data_size": 63488 00:08:15.247 } 00:08:15.247 ] 00:08:15.247 }' 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.247 03:08:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.507 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:15.507 03:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.507 03:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.507 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:15.507 03:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.768 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:15.768 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:15.768 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:15.768 03:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.768 03:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.768 [2024-11-18 03:08:19.106338] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.768 03:08:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.768 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 296b25c2-af73-4667-94b9-b0b233c23685 '!=' 296b25c2-af73-4667-94b9-b0b233c23685 ']' 00:08:15.768 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74601 00:08:15.768 03:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74601 ']' 00:08:15.768 03:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74601 00:08:15.768 03:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:15.768 03:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:15.768 03:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74601 00:08:15.768 killing process with pid 74601 00:08:15.768 03:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:15.768 03:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:15.768 03:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74601' 00:08:15.768 03:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74601 00:08:15.768 03:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74601 00:08:15.768 [2024-11-18 03:08:19.187522] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:15.768 [2024-11-18 03:08:19.187620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.768 [2024-11-18 03:08:19.187677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.768 [2024-11-18 03:08:19.187687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state 
offline 00:08:15.768 [2024-11-18 03:08:19.211163] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:16.025 03:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:16.025 00:08:16.025 real 0m5.020s 00:08:16.025 user 0m8.203s 00:08:16.025 sys 0m1.044s 00:08:16.025 03:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.025 03:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.025 ************************************ 00:08:16.025 END TEST raid_superblock_test 00:08:16.025 ************************************ 00:08:16.025 03:08:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:16.025 03:08:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:16.025 03:08:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.025 03:08:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.025 ************************************ 00:08:16.025 START TEST raid_read_error_test 00:08:16.025 ************************************ 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gf7pPL09NY 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74920 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74920 00:08:16.025 
03:08:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74920 ']' 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.025 03:08:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.284 [2024-11-18 03:08:19.631734] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:16.285 [2024-11-18 03:08:19.631878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74920 ] 00:08:16.285 [2024-11-18 03:08:19.792579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.285 [2024-11-18 03:08:19.843513] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.543 [2024-11-18 03:08:19.886265] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.543 [2024-11-18 03:08:19.886310] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.113 BaseBdev1_malloc 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.113 true 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.113 [2024-11-18 03:08:20.516759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:17.113 [2024-11-18 03:08:20.516825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.113 [2024-11-18 03:08:20.516848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:17.113 [2024-11-18 03:08:20.516857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.113 [2024-11-18 03:08:20.519222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.113 [2024-11-18 03:08:20.519267] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:08:17.113 BaseBdev1 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.113 BaseBdev2_malloc 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.113 true 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.113 [2024-11-18 03:08:20.569186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:17.113 [2024-11-18 03:08:20.569282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.113 [2024-11-18 03:08:20.569308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:17.113 [2024-11-18 03:08:20.569318] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.113 [2024-11-18 03:08:20.571632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.113 [2024-11-18 03:08:20.571676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:17.113 BaseBdev2 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.113 [2024-11-18 03:08:20.581188] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:17.113 [2024-11-18 03:08:20.583261] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.113 [2024-11-18 03:08:20.583478] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:17.113 [2024-11-18 03:08:20.583492] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:17.113 [2024-11-18 03:08:20.583815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:17.113 [2024-11-18 03:08:20.584007] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:17.113 [2024-11-18 03:08:20.584030] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:17.113 [2024-11-18 03:08:20.584196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:17.113 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.114 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.114 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.114 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.114 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.114 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.114 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.114 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.114 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:17.114 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.114 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.114 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.114 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.114 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.114 "name": "raid_bdev1", 00:08:17.114 "uuid": "929dfe17-a1d2-45cf-96ab-2d78cf9e7fd5", 00:08:17.114 "strip_size_kb": 0, 00:08:17.114 "state": "online", 00:08:17.114 "raid_level": "raid1", 00:08:17.114 "superblock": true, 00:08:17.114 "num_base_bdevs": 2, 00:08:17.114 
"num_base_bdevs_discovered": 2, 00:08:17.114 "num_base_bdevs_operational": 2, 00:08:17.114 "base_bdevs_list": [ 00:08:17.114 { 00:08:17.114 "name": "BaseBdev1", 00:08:17.114 "uuid": "b0744cc9-aafe-56dc-8d25-2d889bc99790", 00:08:17.114 "is_configured": true, 00:08:17.114 "data_offset": 2048, 00:08:17.114 "data_size": 63488 00:08:17.114 }, 00:08:17.114 { 00:08:17.114 "name": "BaseBdev2", 00:08:17.114 "uuid": "28c03405-0aaa-50c2-8bad-534b71ec44ae", 00:08:17.114 "is_configured": true, 00:08:17.114 "data_offset": 2048, 00:08:17.114 "data_size": 63488 00:08:17.114 } 00:08:17.114 ] 00:08:17.114 }' 00:08:17.114 03:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.114 03:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.683 03:08:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:17.683 03:08:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:17.683 [2024-11-18 03:08:21.108664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:18.680 03:08:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.680 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.680 "name": "raid_bdev1", 00:08:18.680 "uuid": "929dfe17-a1d2-45cf-96ab-2d78cf9e7fd5", 00:08:18.680 "strip_size_kb": 0, 00:08:18.680 "state": "online", 
00:08:18.680 "raid_level": "raid1", 00:08:18.680 "superblock": true, 00:08:18.680 "num_base_bdevs": 2, 00:08:18.680 "num_base_bdevs_discovered": 2, 00:08:18.681 "num_base_bdevs_operational": 2, 00:08:18.681 "base_bdevs_list": [ 00:08:18.681 { 00:08:18.681 "name": "BaseBdev1", 00:08:18.681 "uuid": "b0744cc9-aafe-56dc-8d25-2d889bc99790", 00:08:18.681 "is_configured": true, 00:08:18.681 "data_offset": 2048, 00:08:18.681 "data_size": 63488 00:08:18.681 }, 00:08:18.681 { 00:08:18.681 "name": "BaseBdev2", 00:08:18.681 "uuid": "28c03405-0aaa-50c2-8bad-534b71ec44ae", 00:08:18.681 "is_configured": true, 00:08:18.681 "data_offset": 2048, 00:08:18.681 "data_size": 63488 00:08:18.681 } 00:08:18.681 ] 00:08:18.681 }' 00:08:18.681 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.681 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.939 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:18.939 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.939 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.939 [2024-11-18 03:08:22.480736] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:18.939 [2024-11-18 03:08:22.480785] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.939 [2024-11-18 03:08:22.483715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.939 [2024-11-18 03:08:22.483772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.939 [2024-11-18 03:08:22.483869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:18.939 [2024-11-18 03:08:22.483886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name 
raid_bdev1, state offline 00:08:18.939 { 00:08:18.939 "results": [ 00:08:18.939 { 00:08:18.939 "job": "raid_bdev1", 00:08:18.939 "core_mask": "0x1", 00:08:18.939 "workload": "randrw", 00:08:18.939 "percentage": 50, 00:08:18.939 "status": "finished", 00:08:18.939 "queue_depth": 1, 00:08:18.939 "io_size": 131072, 00:08:18.939 "runtime": 1.372785, 00:08:18.939 "iops": 17980.237254923384, 00:08:18.939 "mibps": 2247.529656865423, 00:08:18.939 "io_failed": 0, 00:08:18.939 "io_timeout": 0, 00:08:18.939 "avg_latency_us": 52.92550688582757, 00:08:18.939 "min_latency_us": 23.58777292576419, 00:08:18.939 "max_latency_us": 1480.9991266375546 00:08:18.939 } 00:08:18.939 ], 00:08:18.939 "core_count": 1 00:08:18.939 } 00:08:18.939 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.939 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74920 00:08:18.939 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74920 ']' 00:08:18.939 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74920 00:08:18.939 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:18.939 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:18.939 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74920 00:08:19.198 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:19.198 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:19.198 killing process with pid 74920 00:08:19.198 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74920' 00:08:19.198 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74920 00:08:19.198 [2024-11-18 
03:08:22.531801] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:19.198 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74920 00:08:19.198 [2024-11-18 03:08:22.547919] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:19.457 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:19.457 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gf7pPL09NY 00:08:19.457 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:19.457 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:19.457 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:19.457 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:19.457 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:19.457 03:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:19.457 00:08:19.457 real 0m3.267s 00:08:19.457 user 0m4.136s 00:08:19.457 sys 0m0.532s 00:08:19.457 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.457 03:08:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.457 ************************************ 00:08:19.457 END TEST raid_read_error_test 00:08:19.457 ************************************ 00:08:19.457 03:08:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:19.457 03:08:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:19.457 03:08:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.457 03:08:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:19.457 ************************************ 00:08:19.457 START TEST 
raid_write_error_test 00:08:19.457 ************************************ 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:19.457 03:08:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QAPpTDhzUp 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75049 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75049 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75049 ']' 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:19.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:19.457 03:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.457 [2024-11-18 03:08:22.966573] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:19.457 [2024-11-18 03:08:22.966723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75049 ] 00:08:19.716 [2024-11-18 03:08:23.128802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.716 [2024-11-18 03:08:23.179154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.716 [2024-11-18 03:08:23.221835] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.716 [2024-11-18 03:08:23.221879] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.282 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.282 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:20.282 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:20.282 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:20.282 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.282 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.282 BaseBdev1_malloc 00:08:20.282 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.282 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:20.282 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.282 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.282 true 00:08:20.282 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:20.282 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:20.282 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.282 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.282 [2024-11-18 03:08:23.852554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:20.282 [2024-11-18 03:08:23.852641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.282 [2024-11-18 03:08:23.852678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:20.282 [2024-11-18 03:08:23.852691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.282 [2024-11-18 03:08:23.855138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.282 [2024-11-18 03:08:23.855189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:20.282 BaseBdev1 00:08:20.282 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.542 BaseBdev2_malloc 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:20.542 03:08:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.542 true 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.542 [2024-11-18 03:08:23.900530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:20.542 [2024-11-18 03:08:23.900596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.542 [2024-11-18 03:08:23.900617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:20.542 [2024-11-18 03:08:23.900625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.542 [2024-11-18 03:08:23.902871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.542 [2024-11-18 03:08:23.902911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:20.542 BaseBdev2 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.542 [2024-11-18 03:08:23.912558] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:20.542 [2024-11-18 03:08:23.914576] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:20.542 [2024-11-18 03:08:23.914757] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:20.542 [2024-11-18 03:08:23.914770] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:20.542 [2024-11-18 03:08:23.915092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:20.542 [2024-11-18 03:08:23.915278] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:20.542 [2024-11-18 03:08:23.915306] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:20.542 [2024-11-18 03:08:23.915478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.542 "name": "raid_bdev1", 00:08:20.542 "uuid": "991652d1-34fb-4e65-8a5a-8b2361a0f554", 00:08:20.542 "strip_size_kb": 0, 00:08:20.542 "state": "online", 00:08:20.542 "raid_level": "raid1", 00:08:20.542 "superblock": true, 00:08:20.542 "num_base_bdevs": 2, 00:08:20.542 "num_base_bdevs_discovered": 2, 00:08:20.542 "num_base_bdevs_operational": 2, 00:08:20.542 "base_bdevs_list": [ 00:08:20.542 { 00:08:20.542 "name": "BaseBdev1", 00:08:20.542 "uuid": "5c1923db-a33b-5a31-ad33-434097c89be9", 00:08:20.542 "is_configured": true, 00:08:20.542 "data_offset": 2048, 00:08:20.542 "data_size": 63488 00:08:20.542 }, 00:08:20.542 { 00:08:20.542 "name": "BaseBdev2", 00:08:20.542 "uuid": "4ea35873-fc55-5442-bfa5-52f9d59c21c5", 00:08:20.542 "is_configured": true, 00:08:20.542 "data_offset": 2048, 00:08:20.542 "data_size": 63488 00:08:20.542 } 00:08:20.542 ] 00:08:20.542 }' 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.542 03:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.802 03:08:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:20.802 03:08:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:21.061 [2024-11-18 03:08:24.452035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.999 [2024-11-18 03:08:25.373277] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:21.999 [2024-11-18 03:08:25.373345] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:21.999 [2024-11-18 03:08:25.373551] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.999 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.000 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.000 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.000 "name": "raid_bdev1", 00:08:22.000 "uuid": "991652d1-34fb-4e65-8a5a-8b2361a0f554", 00:08:22.000 "strip_size_kb": 0, 00:08:22.000 "state": "online", 00:08:22.000 "raid_level": "raid1", 00:08:22.000 "superblock": true, 00:08:22.000 "num_base_bdevs": 2, 00:08:22.000 "num_base_bdevs_discovered": 1, 00:08:22.000 "num_base_bdevs_operational": 1, 00:08:22.000 "base_bdevs_list": [ 00:08:22.000 { 00:08:22.000 "name": null, 00:08:22.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.000 "is_configured": false, 00:08:22.000 "data_offset": 0, 00:08:22.000 "data_size": 63488 00:08:22.000 }, 00:08:22.000 { 00:08:22.000 "name": 
"BaseBdev2", 00:08:22.000 "uuid": "4ea35873-fc55-5442-bfa5-52f9d59c21c5", 00:08:22.000 "is_configured": true, 00:08:22.000 "data_offset": 2048, 00:08:22.000 "data_size": 63488 00:08:22.000 } 00:08:22.000 ] 00:08:22.000 }' 00:08:22.000 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.000 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.259 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:22.259 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.259 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.259 [2024-11-18 03:08:25.775416] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.259 [2024-11-18 03:08:25.775459] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.259 [2024-11-18 03:08:25.778136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.259 [2024-11-18 03:08:25.778193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.259 [2024-11-18 03:08:25.778245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.259 [2024-11-18 03:08:25.778256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:22.259 { 00:08:22.259 "results": [ 00:08:22.259 { 00:08:22.259 "job": "raid_bdev1", 00:08:22.259 "core_mask": "0x1", 00:08:22.259 "workload": "randrw", 00:08:22.259 "percentage": 50, 00:08:22.259 "status": "finished", 00:08:22.259 "queue_depth": 1, 00:08:22.259 "io_size": 131072, 00:08:22.259 "runtime": 1.323949, 00:08:22.259 "iops": 20060.440394607343, 00:08:22.259 "mibps": 2507.555049325918, 00:08:22.259 "io_failed": 0, 00:08:22.259 "io_timeout": 0, 
00:08:22.259 "avg_latency_us": 47.125175143550386, 00:08:22.259 "min_latency_us": 23.475982532751093, 00:08:22.259 "max_latency_us": 1667.0183406113538 00:08:22.259 } 00:08:22.259 ], 00:08:22.259 "core_count": 1 00:08:22.259 } 00:08:22.259 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.259 03:08:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75049 00:08:22.259 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75049 ']' 00:08:22.259 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75049 00:08:22.259 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:22.259 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.259 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75049 00:08:22.259 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:22.260 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:22.260 killing process with pid 75049 00:08:22.260 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75049' 00:08:22.260 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75049 00:08:22.260 [2024-11-18 03:08:25.817730] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:22.260 03:08:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75049 00:08:22.260 [2024-11-18 03:08:25.833712] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:22.519 03:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QAPpTDhzUp 00:08:22.519 03:08:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:22.519 03:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:22.519 03:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:22.519 03:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:22.519 03:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:22.519 03:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:22.519 03:08:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:22.519 00:08:22.519 real 0m3.214s 00:08:22.519 user 0m4.074s 00:08:22.519 sys 0m0.510s 00:08:22.519 03:08:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.519 03:08:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.519 ************************************ 00:08:22.519 END TEST raid_write_error_test 00:08:22.519 ************************************ 00:08:22.779 03:08:26 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:22.779 03:08:26 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:22.779 03:08:26 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:22.779 03:08:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:22.779 03:08:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.779 03:08:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:22.779 ************************************ 00:08:22.779 START TEST raid_state_function_test 00:08:22.779 ************************************ 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:22.779 
03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75176 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:22.779 Process raid pid: 75176 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75176' 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75176 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 75176 ']' 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.779 03:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.779 [2024-11-18 03:08:26.241790] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:22.780 [2024-11-18 03:08:26.241928] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.039 [2024-11-18 03:08:26.404200] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.039 [2024-11-18 03:08:26.455108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.039 [2024-11-18 03:08:26.497766] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.039 [2024-11-18 03:08:26.497807] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.608 [2024-11-18 03:08:27.108248] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:23.608 [2024-11-18 03:08:27.108311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:23.608 [2024-11-18 03:08:27.108325] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.608 [2024-11-18 03:08:27.108336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.608 [2024-11-18 03:08:27.108343] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:23.608 [2024-11-18 03:08:27.108355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.608 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.608 "name": "Existed_Raid", 00:08:23.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.608 "strip_size_kb": 64, 00:08:23.609 "state": "configuring", 00:08:23.609 "raid_level": "raid0", 00:08:23.609 "superblock": false, 00:08:23.609 "num_base_bdevs": 3, 00:08:23.609 "num_base_bdevs_discovered": 0, 00:08:23.609 "num_base_bdevs_operational": 3, 00:08:23.609 "base_bdevs_list": [ 00:08:23.609 { 00:08:23.609 "name": "BaseBdev1", 00:08:23.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.609 "is_configured": false, 00:08:23.609 "data_offset": 0, 00:08:23.609 "data_size": 0 00:08:23.609 }, 00:08:23.609 { 00:08:23.609 "name": "BaseBdev2", 00:08:23.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.609 "is_configured": false, 00:08:23.609 "data_offset": 0, 00:08:23.609 "data_size": 0 00:08:23.609 }, 00:08:23.609 { 00:08:23.609 "name": "BaseBdev3", 00:08:23.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.609 "is_configured": false, 00:08:23.609 "data_offset": 0, 00:08:23.609 "data_size": 0 00:08:23.609 } 00:08:23.609 ] 00:08:23.609 }' 00:08:23.609 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.609 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.178 03:08:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.178 [2024-11-18 03:08:27.571362] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:24.178 [2024-11-18 03:08:27.571413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.178 [2024-11-18 03:08:27.587370] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:24.178 [2024-11-18 03:08:27.587422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:24.178 [2024-11-18 03:08:27.587431] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:24.178 [2024-11-18 03:08:27.587441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:24.178 [2024-11-18 03:08:27.587448] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:24.178 [2024-11-18 03:08:27.587456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.178 [2024-11-18 03:08:27.608624] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.178 BaseBdev1 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.178 [ 00:08:24.178 { 00:08:24.178 "name": "BaseBdev1", 00:08:24.178 "aliases": [ 00:08:24.178 "5fe158aa-42c4-4cce-9ab6-c23132576df0" 00:08:24.178 ], 00:08:24.178 
"product_name": "Malloc disk", 00:08:24.178 "block_size": 512, 00:08:24.178 "num_blocks": 65536, 00:08:24.178 "uuid": "5fe158aa-42c4-4cce-9ab6-c23132576df0", 00:08:24.178 "assigned_rate_limits": { 00:08:24.178 "rw_ios_per_sec": 0, 00:08:24.178 "rw_mbytes_per_sec": 0, 00:08:24.178 "r_mbytes_per_sec": 0, 00:08:24.178 "w_mbytes_per_sec": 0 00:08:24.178 }, 00:08:24.178 "claimed": true, 00:08:24.178 "claim_type": "exclusive_write", 00:08:24.178 "zoned": false, 00:08:24.178 "supported_io_types": { 00:08:24.178 "read": true, 00:08:24.178 "write": true, 00:08:24.178 "unmap": true, 00:08:24.178 "flush": true, 00:08:24.178 "reset": true, 00:08:24.178 "nvme_admin": false, 00:08:24.178 "nvme_io": false, 00:08:24.178 "nvme_io_md": false, 00:08:24.178 "write_zeroes": true, 00:08:24.178 "zcopy": true, 00:08:24.178 "get_zone_info": false, 00:08:24.178 "zone_management": false, 00:08:24.178 "zone_append": false, 00:08:24.178 "compare": false, 00:08:24.178 "compare_and_write": false, 00:08:24.178 "abort": true, 00:08:24.178 "seek_hole": false, 00:08:24.178 "seek_data": false, 00:08:24.178 "copy": true, 00:08:24.178 "nvme_iov_md": false 00:08:24.178 }, 00:08:24.178 "memory_domains": [ 00:08:24.178 { 00:08:24.178 "dma_device_id": "system", 00:08:24.178 "dma_device_type": 1 00:08:24.178 }, 00:08:24.178 { 00:08:24.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.178 "dma_device_type": 2 00:08:24.178 } 00:08:24.178 ], 00:08:24.178 "driver_specific": {} 00:08:24.178 } 00:08:24.178 ] 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.178 03:08:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.178 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.178 "name": "Existed_Raid", 00:08:24.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.178 "strip_size_kb": 64, 00:08:24.179 "state": "configuring", 00:08:24.179 "raid_level": "raid0", 00:08:24.179 "superblock": false, 00:08:24.179 "num_base_bdevs": 3, 00:08:24.179 "num_base_bdevs_discovered": 1, 00:08:24.179 "num_base_bdevs_operational": 3, 00:08:24.179 "base_bdevs_list": [ 00:08:24.179 { 00:08:24.179 "name": "BaseBdev1", 
00:08:24.179 "uuid": "5fe158aa-42c4-4cce-9ab6-c23132576df0", 00:08:24.179 "is_configured": true, 00:08:24.179 "data_offset": 0, 00:08:24.179 "data_size": 65536 00:08:24.179 }, 00:08:24.179 { 00:08:24.179 "name": "BaseBdev2", 00:08:24.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.179 "is_configured": false, 00:08:24.179 "data_offset": 0, 00:08:24.179 "data_size": 0 00:08:24.179 }, 00:08:24.179 { 00:08:24.179 "name": "BaseBdev3", 00:08:24.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.179 "is_configured": false, 00:08:24.179 "data_offset": 0, 00:08:24.179 "data_size": 0 00:08:24.179 } 00:08:24.179 ] 00:08:24.179 }' 00:08:24.179 03:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.179 03:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.748 [2024-11-18 03:08:28.083890] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:24.748 [2024-11-18 03:08:28.083950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.748 [2024-11-18 
03:08:28.095910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.748 [2024-11-18 03:08:28.097959] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:24.748 [2024-11-18 03:08:28.098019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:24.748 [2024-11-18 03:08:28.098029] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:24.748 [2024-11-18 03:08:28.098040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.748 "name": "Existed_Raid", 00:08:24.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.748 "strip_size_kb": 64, 00:08:24.748 "state": "configuring", 00:08:24.748 "raid_level": "raid0", 00:08:24.748 "superblock": false, 00:08:24.748 "num_base_bdevs": 3, 00:08:24.748 "num_base_bdevs_discovered": 1, 00:08:24.748 "num_base_bdevs_operational": 3, 00:08:24.748 "base_bdevs_list": [ 00:08:24.748 { 00:08:24.748 "name": "BaseBdev1", 00:08:24.748 "uuid": "5fe158aa-42c4-4cce-9ab6-c23132576df0", 00:08:24.748 "is_configured": true, 00:08:24.748 "data_offset": 0, 00:08:24.748 "data_size": 65536 00:08:24.748 }, 00:08:24.748 { 00:08:24.748 "name": "BaseBdev2", 00:08:24.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.748 "is_configured": false, 00:08:24.748 "data_offset": 0, 00:08:24.748 "data_size": 0 00:08:24.748 }, 00:08:24.748 { 00:08:24.748 "name": "BaseBdev3", 00:08:24.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.748 "is_configured": false, 00:08:24.748 "data_offset": 0, 00:08:24.748 "data_size": 0 00:08:24.748 } 00:08:24.748 ] 00:08:24.748 }' 00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:24.748 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.008 [2024-11-18 03:08:28.541395] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.008 BaseBdev2 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:25.008 03:08:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.008 [ 00:08:25.008 { 00:08:25.008 "name": "BaseBdev2", 00:08:25.008 "aliases": [ 00:08:25.008 "54f7131c-83b3-4f8b-8101-2ba3453e38d6" 00:08:25.008 ], 00:08:25.008 "product_name": "Malloc disk", 00:08:25.008 "block_size": 512, 00:08:25.008 "num_blocks": 65536, 00:08:25.008 "uuid": "54f7131c-83b3-4f8b-8101-2ba3453e38d6", 00:08:25.008 "assigned_rate_limits": { 00:08:25.008 "rw_ios_per_sec": 0, 00:08:25.008 "rw_mbytes_per_sec": 0, 00:08:25.008 "r_mbytes_per_sec": 0, 00:08:25.008 "w_mbytes_per_sec": 0 00:08:25.008 }, 00:08:25.008 "claimed": true, 00:08:25.008 "claim_type": "exclusive_write", 00:08:25.008 "zoned": false, 00:08:25.008 "supported_io_types": { 00:08:25.008 "read": true, 00:08:25.008 "write": true, 00:08:25.008 "unmap": true, 00:08:25.008 "flush": true, 00:08:25.008 "reset": true, 00:08:25.008 "nvme_admin": false, 00:08:25.008 "nvme_io": false, 00:08:25.008 "nvme_io_md": false, 00:08:25.008 "write_zeroes": true, 00:08:25.008 "zcopy": true, 00:08:25.008 "get_zone_info": false, 00:08:25.008 "zone_management": false, 00:08:25.008 "zone_append": false, 00:08:25.008 "compare": false, 00:08:25.008 "compare_and_write": false, 00:08:25.008 "abort": true, 00:08:25.008 "seek_hole": false, 00:08:25.008 "seek_data": false, 00:08:25.008 "copy": true, 00:08:25.008 "nvme_iov_md": false 00:08:25.008 }, 00:08:25.008 "memory_domains": [ 00:08:25.008 { 00:08:25.008 "dma_device_id": "system", 00:08:25.008 "dma_device_type": 1 00:08:25.008 }, 00:08:25.008 { 00:08:25.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.008 "dma_device_type": 2 00:08:25.008 } 00:08:25.008 ], 00:08:25.008 "driver_specific": {} 00:08:25.008 } 00:08:25.008 ] 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.008 03:08:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:25.008 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:25.268 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.268 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.268 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.268 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.268 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.268 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.268 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.268 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.268 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.268 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.268 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.268 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.268 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.268 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.268 03:08:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.268 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.268 "name": "Existed_Raid", 00:08:25.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.269 "strip_size_kb": 64, 00:08:25.269 "state": "configuring", 00:08:25.269 "raid_level": "raid0", 00:08:25.269 "superblock": false, 00:08:25.269 "num_base_bdevs": 3, 00:08:25.269 "num_base_bdevs_discovered": 2, 00:08:25.269 "num_base_bdevs_operational": 3, 00:08:25.269 "base_bdevs_list": [ 00:08:25.269 { 00:08:25.269 "name": "BaseBdev1", 00:08:25.269 "uuid": "5fe158aa-42c4-4cce-9ab6-c23132576df0", 00:08:25.269 "is_configured": true, 00:08:25.269 "data_offset": 0, 00:08:25.269 "data_size": 65536 00:08:25.269 }, 00:08:25.269 { 00:08:25.269 "name": "BaseBdev2", 00:08:25.269 "uuid": "54f7131c-83b3-4f8b-8101-2ba3453e38d6", 00:08:25.269 "is_configured": true, 00:08:25.269 "data_offset": 0, 00:08:25.269 "data_size": 65536 00:08:25.269 }, 00:08:25.269 { 00:08:25.269 "name": "BaseBdev3", 00:08:25.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.269 "is_configured": false, 00:08:25.269 "data_offset": 0, 00:08:25.269 "data_size": 0 00:08:25.269 } 00:08:25.269 ] 00:08:25.269 }' 00:08:25.269 03:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.269 03:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.529 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:25.529 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.529 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.529 [2024-11-18 03:08:29.027649] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:25.529 [2024-11-18 03:08:29.027782] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:25.529 [2024-11-18 03:08:29.027811] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:25.529 [2024-11-18 03:08:29.028174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:25.529 [2024-11-18 03:08:29.028319] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:25.529 [2024-11-18 03:08:29.028343] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:25.529 [2024-11-18 03:08:29.028560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.529 BaseBdev3 00:08:25.529 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.529 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:25.529 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:25.529 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.529 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:25.529 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.530 
03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.530 [ 00:08:25.530 { 00:08:25.530 "name": "BaseBdev3", 00:08:25.530 "aliases": [ 00:08:25.530 "e9bd7b34-315a-4077-a48c-af61fefb0a20" 00:08:25.530 ], 00:08:25.530 "product_name": "Malloc disk", 00:08:25.530 "block_size": 512, 00:08:25.530 "num_blocks": 65536, 00:08:25.530 "uuid": "e9bd7b34-315a-4077-a48c-af61fefb0a20", 00:08:25.530 "assigned_rate_limits": { 00:08:25.530 "rw_ios_per_sec": 0, 00:08:25.530 "rw_mbytes_per_sec": 0, 00:08:25.530 "r_mbytes_per_sec": 0, 00:08:25.530 "w_mbytes_per_sec": 0 00:08:25.530 }, 00:08:25.530 "claimed": true, 00:08:25.530 "claim_type": "exclusive_write", 00:08:25.530 "zoned": false, 00:08:25.530 "supported_io_types": { 00:08:25.530 "read": true, 00:08:25.530 "write": true, 00:08:25.530 "unmap": true, 00:08:25.530 "flush": true, 00:08:25.530 "reset": true, 00:08:25.530 "nvme_admin": false, 00:08:25.530 "nvme_io": false, 00:08:25.530 "nvme_io_md": false, 00:08:25.530 "write_zeroes": true, 00:08:25.530 "zcopy": true, 00:08:25.530 "get_zone_info": false, 00:08:25.530 "zone_management": false, 00:08:25.530 "zone_append": false, 00:08:25.530 "compare": false, 00:08:25.530 "compare_and_write": false, 00:08:25.530 "abort": true, 00:08:25.530 "seek_hole": false, 00:08:25.530 "seek_data": false, 00:08:25.530 "copy": true, 00:08:25.530 "nvme_iov_md": false 00:08:25.530 }, 00:08:25.530 "memory_domains": [ 00:08:25.530 { 00:08:25.530 "dma_device_id": "system", 00:08:25.530 "dma_device_type": 1 00:08:25.530 }, 00:08:25.530 { 00:08:25.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.530 "dma_device_type": 2 00:08:25.530 } 00:08:25.530 ], 00:08:25.530 "driver_specific": {} 00:08:25.530 } 00:08:25.530 ] 
00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:25.530 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.789 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.789 "name": "Existed_Raid", 00:08:25.789 "uuid": "bd114ca4-c8f5-444f-b509-ca57611cd7e2", 00:08:25.789 "strip_size_kb": 64, 00:08:25.789 "state": "online", 00:08:25.789 "raid_level": "raid0", 00:08:25.789 "superblock": false, 00:08:25.789 "num_base_bdevs": 3, 00:08:25.789 "num_base_bdevs_discovered": 3, 00:08:25.789 "num_base_bdevs_operational": 3, 00:08:25.789 "base_bdevs_list": [ 00:08:25.789 { 00:08:25.789 "name": "BaseBdev1", 00:08:25.789 "uuid": "5fe158aa-42c4-4cce-9ab6-c23132576df0", 00:08:25.789 "is_configured": true, 00:08:25.789 "data_offset": 0, 00:08:25.789 "data_size": 65536 00:08:25.789 }, 00:08:25.789 { 00:08:25.789 "name": "BaseBdev2", 00:08:25.789 "uuid": "54f7131c-83b3-4f8b-8101-2ba3453e38d6", 00:08:25.789 "is_configured": true, 00:08:25.789 "data_offset": 0, 00:08:25.789 "data_size": 65536 00:08:25.789 }, 00:08:25.789 { 00:08:25.789 "name": "BaseBdev3", 00:08:25.789 "uuid": "e9bd7b34-315a-4077-a48c-af61fefb0a20", 00:08:25.789 "is_configured": true, 00:08:25.789 "data_offset": 0, 00:08:25.789 "data_size": 65536 00:08:25.789 } 00:08:25.789 ] 00:08:25.789 }' 00:08:25.789 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.789 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.048 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:26.048 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:26.048 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:26.048 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:26.048 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:26.048 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:26.048 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:26.048 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:26.048 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.048 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.048 [2024-11-18 03:08:29.555146] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.048 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.048 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:26.048 "name": "Existed_Raid", 00:08:26.048 "aliases": [ 00:08:26.048 "bd114ca4-c8f5-444f-b509-ca57611cd7e2" 00:08:26.048 ], 00:08:26.048 "product_name": "Raid Volume", 00:08:26.048 "block_size": 512, 00:08:26.048 "num_blocks": 196608, 00:08:26.048 "uuid": "bd114ca4-c8f5-444f-b509-ca57611cd7e2", 00:08:26.048 "assigned_rate_limits": { 00:08:26.048 "rw_ios_per_sec": 0, 00:08:26.048 "rw_mbytes_per_sec": 0, 00:08:26.048 "r_mbytes_per_sec": 0, 00:08:26.048 "w_mbytes_per_sec": 0 00:08:26.048 }, 00:08:26.048 "claimed": false, 00:08:26.048 "zoned": false, 00:08:26.048 "supported_io_types": { 00:08:26.048 "read": true, 00:08:26.048 "write": true, 00:08:26.048 "unmap": true, 00:08:26.048 "flush": true, 00:08:26.048 "reset": true, 00:08:26.048 "nvme_admin": false, 00:08:26.048 "nvme_io": false, 00:08:26.048 "nvme_io_md": false, 00:08:26.048 "write_zeroes": true, 00:08:26.048 "zcopy": false, 00:08:26.048 "get_zone_info": false, 00:08:26.048 "zone_management": false, 00:08:26.048 
"zone_append": false, 00:08:26.048 "compare": false, 00:08:26.048 "compare_and_write": false, 00:08:26.048 "abort": false, 00:08:26.048 "seek_hole": false, 00:08:26.048 "seek_data": false, 00:08:26.048 "copy": false, 00:08:26.048 "nvme_iov_md": false 00:08:26.048 }, 00:08:26.048 "memory_domains": [ 00:08:26.048 { 00:08:26.048 "dma_device_id": "system", 00:08:26.048 "dma_device_type": 1 00:08:26.048 }, 00:08:26.048 { 00:08:26.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.048 "dma_device_type": 2 00:08:26.048 }, 00:08:26.048 { 00:08:26.048 "dma_device_id": "system", 00:08:26.048 "dma_device_type": 1 00:08:26.048 }, 00:08:26.048 { 00:08:26.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.048 "dma_device_type": 2 00:08:26.048 }, 00:08:26.048 { 00:08:26.048 "dma_device_id": "system", 00:08:26.048 "dma_device_type": 1 00:08:26.048 }, 00:08:26.048 { 00:08:26.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.048 "dma_device_type": 2 00:08:26.048 } 00:08:26.049 ], 00:08:26.049 "driver_specific": { 00:08:26.049 "raid": { 00:08:26.049 "uuid": "bd114ca4-c8f5-444f-b509-ca57611cd7e2", 00:08:26.049 "strip_size_kb": 64, 00:08:26.049 "state": "online", 00:08:26.049 "raid_level": "raid0", 00:08:26.049 "superblock": false, 00:08:26.049 "num_base_bdevs": 3, 00:08:26.049 "num_base_bdevs_discovered": 3, 00:08:26.049 "num_base_bdevs_operational": 3, 00:08:26.049 "base_bdevs_list": [ 00:08:26.049 { 00:08:26.049 "name": "BaseBdev1", 00:08:26.049 "uuid": "5fe158aa-42c4-4cce-9ab6-c23132576df0", 00:08:26.049 "is_configured": true, 00:08:26.049 "data_offset": 0, 00:08:26.049 "data_size": 65536 00:08:26.049 }, 00:08:26.049 { 00:08:26.049 "name": "BaseBdev2", 00:08:26.049 "uuid": "54f7131c-83b3-4f8b-8101-2ba3453e38d6", 00:08:26.049 "is_configured": true, 00:08:26.049 "data_offset": 0, 00:08:26.049 "data_size": 65536 00:08:26.049 }, 00:08:26.049 { 00:08:26.049 "name": "BaseBdev3", 00:08:26.049 "uuid": "e9bd7b34-315a-4077-a48c-af61fefb0a20", 00:08:26.049 "is_configured": true, 
00:08:26.049 "data_offset": 0, 00:08:26.049 "data_size": 65536 00:08:26.049 } 00:08:26.049 ] 00:08:26.049 } 00:08:26.049 } 00:08:26.049 }' 00:08:26.049 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:26.049 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:26.049 BaseBdev2 00:08:26.049 BaseBdev3' 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.335 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.336 [2024-11-18 03:08:29.822454] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:26.336 [2024-11-18 03:08:29.822534] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.336 [2024-11-18 03:08:29.822616] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.336 "name": "Existed_Raid", 00:08:26.336 "uuid": "bd114ca4-c8f5-444f-b509-ca57611cd7e2", 00:08:26.336 "strip_size_kb": 64, 00:08:26.336 "state": "offline", 00:08:26.336 "raid_level": "raid0", 00:08:26.336 "superblock": false, 00:08:26.336 "num_base_bdevs": 3, 00:08:26.336 "num_base_bdevs_discovered": 2, 00:08:26.336 "num_base_bdevs_operational": 2, 00:08:26.336 "base_bdevs_list": [ 00:08:26.336 { 00:08:26.336 "name": null, 00:08:26.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.336 "is_configured": false, 00:08:26.336 "data_offset": 0, 00:08:26.336 "data_size": 65536 00:08:26.336 }, 00:08:26.336 { 00:08:26.336 "name": "BaseBdev2", 00:08:26.336 "uuid": "54f7131c-83b3-4f8b-8101-2ba3453e38d6", 00:08:26.336 "is_configured": true, 00:08:26.336 "data_offset": 0, 00:08:26.336 "data_size": 65536 00:08:26.336 }, 00:08:26.336 { 00:08:26.336 "name": "BaseBdev3", 00:08:26.336 "uuid": "e9bd7b34-315a-4077-a48c-af61fefb0a20", 00:08:26.336 "is_configured": true, 00:08:26.336 "data_offset": 0, 00:08:26.336 "data_size": 65536 00:08:26.336 } 00:08:26.336 ] 00:08:26.336 }' 00:08:26.336 03:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.336 03:08:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.906 [2024-11-18 03:08:30.289228] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.906 03:08:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.906 [2024-11-18 03:08:30.360609] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:26.906 [2024-11-18 03:08:30.360744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.906 BaseBdev2 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.906 [ 00:08:26.906 { 00:08:26.906 "name": "BaseBdev2", 00:08:26.906 "aliases": [ 00:08:26.906 "9c91022a-119e-4b81-b194-8160294e02c4" 00:08:26.906 ], 00:08:26.906 "product_name": "Malloc disk", 00:08:26.906 "block_size": 512, 00:08:26.906 "num_blocks": 65536, 00:08:26.906 "uuid": "9c91022a-119e-4b81-b194-8160294e02c4", 00:08:26.906 "assigned_rate_limits": { 00:08:26.906 "rw_ios_per_sec": 0, 00:08:26.906 "rw_mbytes_per_sec": 0, 00:08:26.906 "r_mbytes_per_sec": 0, 00:08:26.906 "w_mbytes_per_sec": 0 00:08:26.906 }, 00:08:26.906 "claimed": false, 00:08:26.906 "zoned": false, 00:08:26.906 "supported_io_types": { 00:08:26.906 "read": true, 00:08:26.906 "write": true, 00:08:26.906 "unmap": true, 00:08:26.906 "flush": true, 00:08:26.906 "reset": true, 00:08:26.906 "nvme_admin": false, 00:08:26.906 "nvme_io": false, 00:08:26.906 "nvme_io_md": false, 00:08:26.906 "write_zeroes": true, 00:08:26.906 "zcopy": true, 00:08:26.906 "get_zone_info": false, 00:08:26.906 "zone_management": false, 00:08:26.906 "zone_append": false, 00:08:26.906 "compare": false, 00:08:26.906 "compare_and_write": false, 00:08:26.906 "abort": true, 00:08:26.906 "seek_hole": false, 00:08:26.906 "seek_data": false, 00:08:26.906 "copy": true, 00:08:26.906 "nvme_iov_md": false 00:08:26.906 }, 00:08:26.906 "memory_domains": [ 00:08:26.906 { 00:08:26.906 "dma_device_id": "system", 00:08:26.906 "dma_device_type": 1 00:08:26.906 }, 
00:08:26.906 { 00:08:26.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.906 "dma_device_type": 2 00:08:26.906 } 00:08:26.906 ], 00:08:26.906 "driver_specific": {} 00:08:26.906 } 00:08:26.906 ] 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:26.906 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.166 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.166 BaseBdev3 00:08:27.166 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.166 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:27.166 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:27.166 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:27.166 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:27.166 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:27.166 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.167 [ 00:08:27.167 { 00:08:27.167 "name": "BaseBdev3", 00:08:27.167 "aliases": [ 00:08:27.167 "08c80c25-145a-46d6-992d-f3eea281fad3" 00:08:27.167 ], 00:08:27.167 "product_name": "Malloc disk", 00:08:27.167 "block_size": 512, 00:08:27.167 "num_blocks": 65536, 00:08:27.167 "uuid": "08c80c25-145a-46d6-992d-f3eea281fad3", 00:08:27.167 "assigned_rate_limits": { 00:08:27.167 "rw_ios_per_sec": 0, 00:08:27.167 "rw_mbytes_per_sec": 0, 00:08:27.167 "r_mbytes_per_sec": 0, 00:08:27.167 "w_mbytes_per_sec": 0 00:08:27.167 }, 00:08:27.167 "claimed": false, 00:08:27.167 "zoned": false, 00:08:27.167 "supported_io_types": { 00:08:27.167 "read": true, 00:08:27.167 "write": true, 00:08:27.167 "unmap": true, 00:08:27.167 "flush": true, 00:08:27.167 "reset": true, 00:08:27.167 "nvme_admin": false, 00:08:27.167 "nvme_io": false, 00:08:27.167 "nvme_io_md": false, 00:08:27.167 "write_zeroes": true, 00:08:27.167 "zcopy": true, 00:08:27.167 "get_zone_info": false, 00:08:27.167 "zone_management": false, 00:08:27.167 "zone_append": false, 00:08:27.167 "compare": false, 00:08:27.167 "compare_and_write": false, 00:08:27.167 "abort": true, 00:08:27.167 "seek_hole": false, 00:08:27.167 "seek_data": false, 00:08:27.167 "copy": true, 00:08:27.167 "nvme_iov_md": false 00:08:27.167 }, 00:08:27.167 "memory_domains": [ 00:08:27.167 { 00:08:27.167 "dma_device_id": "system", 00:08:27.167 "dma_device_type": 1 00:08:27.167 }, 00:08:27.167 { 
00:08:27.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.167 "dma_device_type": 2 00:08:27.167 } 00:08:27.167 ], 00:08:27.167 "driver_specific": {} 00:08:27.167 } 00:08:27.167 ] 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.167 [2024-11-18 03:08:30.542170] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:27.167 [2024-11-18 03:08:30.542305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:27.167 [2024-11-18 03:08:30.542353] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:27.167 [2024-11-18 03:08:30.544329] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.167 "name": "Existed_Raid", 00:08:27.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.167 "strip_size_kb": 64, 00:08:27.167 "state": "configuring", 00:08:27.167 "raid_level": "raid0", 00:08:27.167 "superblock": false, 00:08:27.167 "num_base_bdevs": 3, 00:08:27.167 "num_base_bdevs_discovered": 2, 00:08:27.167 "num_base_bdevs_operational": 3, 00:08:27.167 "base_bdevs_list": [ 00:08:27.167 { 00:08:27.167 "name": "BaseBdev1", 00:08:27.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.167 
"is_configured": false, 00:08:27.167 "data_offset": 0, 00:08:27.167 "data_size": 0 00:08:27.167 }, 00:08:27.167 { 00:08:27.167 "name": "BaseBdev2", 00:08:27.167 "uuid": "9c91022a-119e-4b81-b194-8160294e02c4", 00:08:27.167 "is_configured": true, 00:08:27.167 "data_offset": 0, 00:08:27.167 "data_size": 65536 00:08:27.167 }, 00:08:27.167 { 00:08:27.167 "name": "BaseBdev3", 00:08:27.167 "uuid": "08c80c25-145a-46d6-992d-f3eea281fad3", 00:08:27.167 "is_configured": true, 00:08:27.167 "data_offset": 0, 00:08:27.167 "data_size": 65536 00:08:27.167 } 00:08:27.167 ] 00:08:27.167 }' 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.167 03:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.736 [2024-11-18 03:08:31.013291] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.736 03:08:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.736 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.736 "name": "Existed_Raid", 00:08:27.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.736 "strip_size_kb": 64, 00:08:27.736 "state": "configuring", 00:08:27.736 "raid_level": "raid0", 00:08:27.736 "superblock": false, 00:08:27.736 "num_base_bdevs": 3, 00:08:27.736 "num_base_bdevs_discovered": 1, 00:08:27.736 "num_base_bdevs_operational": 3, 00:08:27.736 "base_bdevs_list": [ 00:08:27.736 { 00:08:27.736 "name": "BaseBdev1", 00:08:27.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.736 "is_configured": false, 00:08:27.736 "data_offset": 0, 00:08:27.736 "data_size": 0 00:08:27.736 }, 00:08:27.736 { 00:08:27.736 "name": null, 00:08:27.736 "uuid": "9c91022a-119e-4b81-b194-8160294e02c4", 00:08:27.736 "is_configured": false, 00:08:27.736 "data_offset": 0, 
00:08:27.736 "data_size": 65536 00:08:27.736 }, 00:08:27.736 { 00:08:27.736 "name": "BaseBdev3", 00:08:27.736 "uuid": "08c80c25-145a-46d6-992d-f3eea281fad3", 00:08:27.737 "is_configured": true, 00:08:27.737 "data_offset": 0, 00:08:27.737 "data_size": 65536 00:08:27.737 } 00:08:27.737 ] 00:08:27.737 }' 00:08:27.737 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.737 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.996 [2024-11-18 03:08:31.511665] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.996 BaseBdev1 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.996 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.996 [ 00:08:27.996 { 00:08:27.996 "name": "BaseBdev1", 00:08:27.996 "aliases": [ 00:08:27.996 "6f44d4be-04cd-4693-88da-cdba4cf626f2" 00:08:27.996 ], 00:08:27.996 "product_name": "Malloc disk", 00:08:27.996 "block_size": 512, 00:08:27.996 "num_blocks": 65536, 00:08:27.996 "uuid": "6f44d4be-04cd-4693-88da-cdba4cf626f2", 00:08:27.996 "assigned_rate_limits": { 00:08:27.996 "rw_ios_per_sec": 0, 00:08:27.996 "rw_mbytes_per_sec": 0, 00:08:27.996 "r_mbytes_per_sec": 0, 00:08:27.996 "w_mbytes_per_sec": 0 00:08:27.996 }, 00:08:27.996 "claimed": true, 00:08:27.996 "claim_type": "exclusive_write", 00:08:27.996 "zoned": false, 00:08:27.996 "supported_io_types": { 00:08:27.996 "read": true, 00:08:27.996 "write": true, 00:08:27.996 "unmap": 
true, 00:08:27.996 "flush": true, 00:08:27.996 "reset": true, 00:08:27.996 "nvme_admin": false, 00:08:27.996 "nvme_io": false, 00:08:27.996 "nvme_io_md": false, 00:08:27.996 "write_zeroes": true, 00:08:27.996 "zcopy": true, 00:08:27.996 "get_zone_info": false, 00:08:27.996 "zone_management": false, 00:08:27.996 "zone_append": false, 00:08:27.996 "compare": false, 00:08:27.996 "compare_and_write": false, 00:08:27.996 "abort": true, 00:08:27.996 "seek_hole": false, 00:08:27.997 "seek_data": false, 00:08:27.997 "copy": true, 00:08:27.997 "nvme_iov_md": false 00:08:27.997 }, 00:08:27.997 "memory_domains": [ 00:08:27.997 { 00:08:27.997 "dma_device_id": "system", 00:08:27.997 "dma_device_type": 1 00:08:27.997 }, 00:08:27.997 { 00:08:27.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.997 "dma_device_type": 2 00:08:27.997 } 00:08:27.997 ], 00:08:27.997 "driver_specific": {} 00:08:27.997 } 00:08:27.997 ] 00:08:27.997 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.997 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:27.997 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.997 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.997 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.997 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.997 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.997 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.997 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.997 03:08:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.997 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.997 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.997 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.997 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.997 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.997 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.256 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.256 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.256 "name": "Existed_Raid", 00:08:28.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.256 "strip_size_kb": 64, 00:08:28.256 "state": "configuring", 00:08:28.256 "raid_level": "raid0", 00:08:28.256 "superblock": false, 00:08:28.256 "num_base_bdevs": 3, 00:08:28.256 "num_base_bdevs_discovered": 2, 00:08:28.256 "num_base_bdevs_operational": 3, 00:08:28.256 "base_bdevs_list": [ 00:08:28.256 { 00:08:28.256 "name": "BaseBdev1", 00:08:28.256 "uuid": "6f44d4be-04cd-4693-88da-cdba4cf626f2", 00:08:28.256 "is_configured": true, 00:08:28.256 "data_offset": 0, 00:08:28.256 "data_size": 65536 00:08:28.256 }, 00:08:28.256 { 00:08:28.256 "name": null, 00:08:28.256 "uuid": "9c91022a-119e-4b81-b194-8160294e02c4", 00:08:28.256 "is_configured": false, 00:08:28.256 "data_offset": 0, 00:08:28.256 "data_size": 65536 00:08:28.256 }, 00:08:28.256 { 00:08:28.256 "name": "BaseBdev3", 00:08:28.256 "uuid": "08c80c25-145a-46d6-992d-f3eea281fad3", 00:08:28.256 "is_configured": true, 00:08:28.256 "data_offset": 0, 
00:08:28.256 "data_size": 65536 00:08:28.256 } 00:08:28.256 ] 00:08:28.256 }' 00:08:28.256 03:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.256 03:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.516 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.516 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.516 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.516 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:28.516 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.516 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:28.516 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:28.516 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.517 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.517 [2024-11-18 03:08:32.046846] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:28.517 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.517 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.517 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.517 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.517 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:28.517 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.517 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.517 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.517 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.517 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.517 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.517 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.517 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.517 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.517 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.517 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.776 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.776 "name": "Existed_Raid", 00:08:28.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.776 "strip_size_kb": 64, 00:08:28.776 "state": "configuring", 00:08:28.776 "raid_level": "raid0", 00:08:28.776 "superblock": false, 00:08:28.776 "num_base_bdevs": 3, 00:08:28.776 "num_base_bdevs_discovered": 1, 00:08:28.776 "num_base_bdevs_operational": 3, 00:08:28.776 "base_bdevs_list": [ 00:08:28.776 { 00:08:28.776 "name": "BaseBdev1", 00:08:28.776 "uuid": "6f44d4be-04cd-4693-88da-cdba4cf626f2", 00:08:28.776 "is_configured": true, 00:08:28.776 "data_offset": 0, 00:08:28.776 "data_size": 65536 00:08:28.776 }, 00:08:28.776 { 
00:08:28.776 "name": null, 00:08:28.776 "uuid": "9c91022a-119e-4b81-b194-8160294e02c4", 00:08:28.776 "is_configured": false, 00:08:28.776 "data_offset": 0, 00:08:28.776 "data_size": 65536 00:08:28.776 }, 00:08:28.776 { 00:08:28.776 "name": null, 00:08:28.776 "uuid": "08c80c25-145a-46d6-992d-f3eea281fad3", 00:08:28.776 "is_configured": false, 00:08:28.776 "data_offset": 0, 00:08:28.776 "data_size": 65536 00:08:28.776 } 00:08:28.776 ] 00:08:28.776 }' 00:08:28.776 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.776 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.036 [2024-11-18 03:08:32.534077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.036 "name": "Existed_Raid", 00:08:29.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.036 "strip_size_kb": 64, 00:08:29.036 "state": "configuring", 00:08:29.036 "raid_level": "raid0", 00:08:29.036 
"superblock": false, 00:08:29.036 "num_base_bdevs": 3, 00:08:29.036 "num_base_bdevs_discovered": 2, 00:08:29.036 "num_base_bdevs_operational": 3, 00:08:29.036 "base_bdevs_list": [ 00:08:29.036 { 00:08:29.036 "name": "BaseBdev1", 00:08:29.036 "uuid": "6f44d4be-04cd-4693-88da-cdba4cf626f2", 00:08:29.036 "is_configured": true, 00:08:29.036 "data_offset": 0, 00:08:29.036 "data_size": 65536 00:08:29.036 }, 00:08:29.036 { 00:08:29.036 "name": null, 00:08:29.036 "uuid": "9c91022a-119e-4b81-b194-8160294e02c4", 00:08:29.036 "is_configured": false, 00:08:29.036 "data_offset": 0, 00:08:29.036 "data_size": 65536 00:08:29.036 }, 00:08:29.036 { 00:08:29.036 "name": "BaseBdev3", 00:08:29.036 "uuid": "08c80c25-145a-46d6-992d-f3eea281fad3", 00:08:29.036 "is_configured": true, 00:08:29.036 "data_offset": 0, 00:08:29.036 "data_size": 65536 00:08:29.036 } 00:08:29.036 ] 00:08:29.036 }' 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.036 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.605 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:29.605 03:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.605 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.605 03:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.605 [2024-11-18 03:08:33.021200] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.605 "name": "Existed_Raid", 00:08:29.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.605 "strip_size_kb": 64, 00:08:29.605 "state": "configuring", 00:08:29.605 "raid_level": "raid0", 00:08:29.605 "superblock": false, 00:08:29.605 "num_base_bdevs": 3, 00:08:29.605 "num_base_bdevs_discovered": 1, 00:08:29.605 "num_base_bdevs_operational": 3, 00:08:29.605 "base_bdevs_list": [ 00:08:29.605 { 00:08:29.605 "name": null, 00:08:29.605 "uuid": "6f44d4be-04cd-4693-88da-cdba4cf626f2", 00:08:29.605 "is_configured": false, 00:08:29.605 "data_offset": 0, 00:08:29.605 "data_size": 65536 00:08:29.605 }, 00:08:29.605 { 00:08:29.605 "name": null, 00:08:29.605 "uuid": "9c91022a-119e-4b81-b194-8160294e02c4", 00:08:29.605 "is_configured": false, 00:08:29.605 "data_offset": 0, 00:08:29.605 "data_size": 65536 00:08:29.605 }, 00:08:29.605 { 00:08:29.605 "name": "BaseBdev3", 00:08:29.605 "uuid": "08c80c25-145a-46d6-992d-f3eea281fad3", 00:08:29.605 "is_configured": true, 00:08:29.605 "data_offset": 0, 00:08:29.605 "data_size": 65536 00:08:29.605 } 00:08:29.605 ] 00:08:29.605 }' 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.605 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.174 [2024-11-18 03:08:33.518998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.174 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.174 "name": "Existed_Raid", 00:08:30.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.175 "strip_size_kb": 64, 00:08:30.175 "state": "configuring", 00:08:30.175 "raid_level": "raid0", 00:08:30.175 "superblock": false, 00:08:30.175 "num_base_bdevs": 3, 00:08:30.175 "num_base_bdevs_discovered": 2, 00:08:30.175 "num_base_bdevs_operational": 3, 00:08:30.175 "base_bdevs_list": [ 00:08:30.175 { 00:08:30.175 "name": null, 00:08:30.175 "uuid": "6f44d4be-04cd-4693-88da-cdba4cf626f2", 00:08:30.175 "is_configured": false, 00:08:30.175 "data_offset": 0, 00:08:30.175 "data_size": 65536 00:08:30.175 }, 00:08:30.175 { 00:08:30.175 "name": "BaseBdev2", 00:08:30.175 "uuid": "9c91022a-119e-4b81-b194-8160294e02c4", 00:08:30.175 "is_configured": true, 00:08:30.175 "data_offset": 0, 00:08:30.175 "data_size": 65536 00:08:30.175 }, 00:08:30.175 { 00:08:30.175 "name": "BaseBdev3", 00:08:30.175 "uuid": "08c80c25-145a-46d6-992d-f3eea281fad3", 00:08:30.175 "is_configured": true, 00:08:30.175 "data_offset": 0, 00:08:30.175 "data_size": 65536 00:08:30.175 } 00:08:30.175 ] 00:08:30.175 }' 00:08:30.175 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.175 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.434 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.434 03:08:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.434 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.434 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:30.434 03:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.434 03:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:30.434 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:30.434 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.434 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.434 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.694 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.694 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6f44d4be-04cd-4693-88da-cdba4cf626f2 00:08:30.694 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.694 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.694 [2024-11-18 03:08:34.061169] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:30.694 [2024-11-18 03:08:34.061291] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:30.694 [2024-11-18 03:08:34.061320] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:30.694 [2024-11-18 03:08:34.061622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 
00:08:30.694 [2024-11-18 03:08:34.061785] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:30.694 [2024-11-18 03:08:34.061831] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:30.694 [2024-11-18 03:08:34.062096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.694 NewBaseBdev 00:08:30.694 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.694 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:30.694 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:30.694 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:30.694 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:30.694 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:30.694 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:30.694 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:30.694 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.694 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.694 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.694 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:30.694 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.694 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:30.694 [ 00:08:30.694 { 00:08:30.694 "name": "NewBaseBdev", 00:08:30.694 "aliases": [ 00:08:30.694 "6f44d4be-04cd-4693-88da-cdba4cf626f2" 00:08:30.694 ], 00:08:30.694 "product_name": "Malloc disk", 00:08:30.694 "block_size": 512, 00:08:30.694 "num_blocks": 65536, 00:08:30.694 "uuid": "6f44d4be-04cd-4693-88da-cdba4cf626f2", 00:08:30.694 "assigned_rate_limits": { 00:08:30.694 "rw_ios_per_sec": 0, 00:08:30.694 "rw_mbytes_per_sec": 0, 00:08:30.694 "r_mbytes_per_sec": 0, 00:08:30.694 "w_mbytes_per_sec": 0 00:08:30.694 }, 00:08:30.694 "claimed": true, 00:08:30.694 "claim_type": "exclusive_write", 00:08:30.695 "zoned": false, 00:08:30.695 "supported_io_types": { 00:08:30.695 "read": true, 00:08:30.695 "write": true, 00:08:30.695 "unmap": true, 00:08:30.695 "flush": true, 00:08:30.695 "reset": true, 00:08:30.695 "nvme_admin": false, 00:08:30.695 "nvme_io": false, 00:08:30.695 "nvme_io_md": false, 00:08:30.695 "write_zeroes": true, 00:08:30.695 "zcopy": true, 00:08:30.695 "get_zone_info": false, 00:08:30.695 "zone_management": false, 00:08:30.695 "zone_append": false, 00:08:30.695 "compare": false, 00:08:30.695 "compare_and_write": false, 00:08:30.695 "abort": true, 00:08:30.695 "seek_hole": false, 00:08:30.695 "seek_data": false, 00:08:30.695 "copy": true, 00:08:30.695 "nvme_iov_md": false 00:08:30.695 }, 00:08:30.695 "memory_domains": [ 00:08:30.695 { 00:08:30.695 "dma_device_id": "system", 00:08:30.695 "dma_device_type": 1 00:08:30.695 }, 00:08:30.695 { 00:08:30.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.695 "dma_device_type": 2 00:08:30.695 } 00:08:30.695 ], 00:08:30.695 "driver_specific": {} 00:08:30.695 } 00:08:30.695 ] 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.695 "name": "Existed_Raid", 00:08:30.695 "uuid": "05827475-3f06-4a52-a340-f55585d49780", 00:08:30.695 "strip_size_kb": 64, 00:08:30.695 "state": "online", 00:08:30.695 "raid_level": "raid0", 00:08:30.695 "superblock": false, 00:08:30.695 "num_base_bdevs": 3, 00:08:30.695 
"num_base_bdevs_discovered": 3, 00:08:30.695 "num_base_bdevs_operational": 3, 00:08:30.695 "base_bdevs_list": [ 00:08:30.695 { 00:08:30.695 "name": "NewBaseBdev", 00:08:30.695 "uuid": "6f44d4be-04cd-4693-88da-cdba4cf626f2", 00:08:30.695 "is_configured": true, 00:08:30.695 "data_offset": 0, 00:08:30.695 "data_size": 65536 00:08:30.695 }, 00:08:30.695 { 00:08:30.695 "name": "BaseBdev2", 00:08:30.695 "uuid": "9c91022a-119e-4b81-b194-8160294e02c4", 00:08:30.695 "is_configured": true, 00:08:30.695 "data_offset": 0, 00:08:30.695 "data_size": 65536 00:08:30.695 }, 00:08:30.695 { 00:08:30.695 "name": "BaseBdev3", 00:08:30.695 "uuid": "08c80c25-145a-46d6-992d-f3eea281fad3", 00:08:30.695 "is_configured": true, 00:08:30.695 "data_offset": 0, 00:08:30.695 "data_size": 65536 00:08:30.695 } 00:08:30.695 ] 00:08:30.695 }' 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.695 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.265 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:31.265 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:31.265 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:31.265 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:31.265 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:31.265 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:31.265 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:31.265 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.265 03:08:34 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:31.265 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:31.265 [2024-11-18 03:08:34.564667] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.265 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.265 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:31.265 "name": "Existed_Raid", 00:08:31.265 "aliases": [ 00:08:31.265 "05827475-3f06-4a52-a340-f55585d49780" 00:08:31.265 ], 00:08:31.265 "product_name": "Raid Volume", 00:08:31.265 "block_size": 512, 00:08:31.265 "num_blocks": 196608, 00:08:31.265 "uuid": "05827475-3f06-4a52-a340-f55585d49780", 00:08:31.265 "assigned_rate_limits": { 00:08:31.265 "rw_ios_per_sec": 0, 00:08:31.265 "rw_mbytes_per_sec": 0, 00:08:31.265 "r_mbytes_per_sec": 0, 00:08:31.265 "w_mbytes_per_sec": 0 00:08:31.265 }, 00:08:31.265 "claimed": false, 00:08:31.265 "zoned": false, 00:08:31.265 "supported_io_types": { 00:08:31.265 "read": true, 00:08:31.265 "write": true, 00:08:31.265 "unmap": true, 00:08:31.265 "flush": true, 00:08:31.265 "reset": true, 00:08:31.265 "nvme_admin": false, 00:08:31.265 "nvme_io": false, 00:08:31.265 "nvme_io_md": false, 00:08:31.265 "write_zeroes": true, 00:08:31.265 "zcopy": false, 00:08:31.265 "get_zone_info": false, 00:08:31.265 "zone_management": false, 00:08:31.265 "zone_append": false, 00:08:31.265 "compare": false, 00:08:31.265 "compare_and_write": false, 00:08:31.265 "abort": false, 00:08:31.265 "seek_hole": false, 00:08:31.265 "seek_data": false, 00:08:31.265 "copy": false, 00:08:31.265 "nvme_iov_md": false 00:08:31.265 }, 00:08:31.265 "memory_domains": [ 00:08:31.265 { 00:08:31.265 "dma_device_id": "system", 00:08:31.265 "dma_device_type": 1 00:08:31.265 }, 00:08:31.265 { 00:08:31.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.265 "dma_device_type": 2 00:08:31.266 }, 00:08:31.266 
{ 00:08:31.266 "dma_device_id": "system", 00:08:31.266 "dma_device_type": 1 00:08:31.266 }, 00:08:31.266 { 00:08:31.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.266 "dma_device_type": 2 00:08:31.266 }, 00:08:31.266 { 00:08:31.266 "dma_device_id": "system", 00:08:31.266 "dma_device_type": 1 00:08:31.266 }, 00:08:31.266 { 00:08:31.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.266 "dma_device_type": 2 00:08:31.266 } 00:08:31.266 ], 00:08:31.266 "driver_specific": { 00:08:31.266 "raid": { 00:08:31.266 "uuid": "05827475-3f06-4a52-a340-f55585d49780", 00:08:31.266 "strip_size_kb": 64, 00:08:31.266 "state": "online", 00:08:31.266 "raid_level": "raid0", 00:08:31.266 "superblock": false, 00:08:31.266 "num_base_bdevs": 3, 00:08:31.266 "num_base_bdevs_discovered": 3, 00:08:31.266 "num_base_bdevs_operational": 3, 00:08:31.266 "base_bdevs_list": [ 00:08:31.266 { 00:08:31.266 "name": "NewBaseBdev", 00:08:31.266 "uuid": "6f44d4be-04cd-4693-88da-cdba4cf626f2", 00:08:31.266 "is_configured": true, 00:08:31.266 "data_offset": 0, 00:08:31.266 "data_size": 65536 00:08:31.266 }, 00:08:31.266 { 00:08:31.266 "name": "BaseBdev2", 00:08:31.266 "uuid": "9c91022a-119e-4b81-b194-8160294e02c4", 00:08:31.266 "is_configured": true, 00:08:31.266 "data_offset": 0, 00:08:31.266 "data_size": 65536 00:08:31.266 }, 00:08:31.266 { 00:08:31.266 "name": "BaseBdev3", 00:08:31.266 "uuid": "08c80c25-145a-46d6-992d-f3eea281fad3", 00:08:31.266 "is_configured": true, 00:08:31.266 "data_offset": 0, 00:08:31.266 "data_size": 65536 00:08:31.266 } 00:08:31.266 ] 00:08:31.266 } 00:08:31.266 } 00:08:31.266 }' 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:31.266 BaseBdev2 00:08:31.266 BaseBdev3' 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.266 
03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.266 [2024-11-18 03:08:34.803970] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:31.266 [2024-11-18 03:08:34.804053] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.266 [2024-11-18 03:08:34.804161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.266 [2024-11-18 03:08:34.804237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.266 [2024-11-18 03:08:34.804298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75176 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 75176 ']' 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 75176 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:31.266 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75176 00:08:31.526 killing process with pid 75176 00:08:31.526 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:31.526 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:31.526 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75176' 00:08:31.526 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 75176 00:08:31.526 [2024-11-18 03:08:34.852737] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:31.526 03:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 75176 00:08:31.526 [2024-11-18 03:08:34.883881] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:31.786 00:08:31.786 real 0m8.980s 00:08:31.786 user 0m15.384s 00:08:31.786 sys 0m1.780s 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.786 
03:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.786 ************************************ 00:08:31.786 END TEST raid_state_function_test 00:08:31.786 ************************************ 00:08:31.786 03:08:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:31.786 03:08:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:31.786 03:08:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.786 03:08:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.786 ************************************ 00:08:31.786 START TEST raid_state_function_test_sb 00:08:31.786 ************************************ 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75786 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75786' 00:08:31.786 Process raid pid: 75786 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75786 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75786 ']' 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.786 03:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.786 [2024-11-18 03:08:35.287088] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:31.786 [2024-11-18 03:08:35.287334] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.046 [2024-11-18 03:08:35.449821] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.046 [2024-11-18 03:08:35.500347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.046 [2024-11-18 03:08:35.542265] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.046 [2024-11-18 03:08:35.542303] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.615 [2024-11-18 03:08:36.155818] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.615 [2024-11-18 03:08:36.155877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.615 [2024-11-18 03:08:36.155892] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.615 [2024-11-18 03:08:36.155904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.615 [2024-11-18 03:08:36.155911] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:32.615 [2024-11-18 03:08:36.155923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.615 03:08:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.875 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.875 "name": "Existed_Raid", 00:08:32.875 "uuid": "5b44ad41-b586-49c9-9b5a-13d2106a8222", 00:08:32.875 "strip_size_kb": 64, 00:08:32.875 "state": "configuring", 00:08:32.875 "raid_level": "raid0", 00:08:32.875 "superblock": true, 00:08:32.875 "num_base_bdevs": 3, 00:08:32.875 "num_base_bdevs_discovered": 0, 00:08:32.875 "num_base_bdevs_operational": 3, 00:08:32.875 "base_bdevs_list": [ 00:08:32.875 { 00:08:32.875 "name": "BaseBdev1", 00:08:32.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.875 "is_configured": false, 00:08:32.875 "data_offset": 0, 00:08:32.875 "data_size": 0 00:08:32.875 }, 00:08:32.875 { 00:08:32.875 "name": "BaseBdev2", 00:08:32.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.875 "is_configured": false, 00:08:32.875 "data_offset": 0, 00:08:32.875 "data_size": 0 00:08:32.875 }, 00:08:32.875 { 00:08:32.875 "name": "BaseBdev3", 00:08:32.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.875 "is_configured": false, 00:08:32.875 "data_offset": 0, 00:08:32.875 "data_size": 0 00:08:32.875 } 00:08:32.875 ] 00:08:32.875 }' 00:08:32.875 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.875 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.136 [2024-11-18 03:08:36.563024] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.136 [2024-11-18 03:08:36.563074] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.136 [2024-11-18 03:08:36.575061] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.136 [2024-11-18 03:08:36.575105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.136 [2024-11-18 03:08:36.575114] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.136 [2024-11-18 03:08:36.575124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.136 [2024-11-18 03:08:36.575130] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:33.136 [2024-11-18 03:08:36.575139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.136 [2024-11-18 03:08:36.596052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.136 BaseBdev1 
00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.136 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.136 [ 00:08:33.136 { 00:08:33.136 "name": "BaseBdev1", 00:08:33.136 "aliases": [ 00:08:33.136 "12a28f2b-0fa0-44d3-b258-bf4ba87192d2" 00:08:33.136 ], 00:08:33.136 "product_name": "Malloc disk", 00:08:33.136 "block_size": 512, 00:08:33.136 "num_blocks": 65536, 00:08:33.136 "uuid": "12a28f2b-0fa0-44d3-b258-bf4ba87192d2", 00:08:33.136 "assigned_rate_limits": { 00:08:33.136 
"rw_ios_per_sec": 0, 00:08:33.136 "rw_mbytes_per_sec": 0, 00:08:33.136 "r_mbytes_per_sec": 0, 00:08:33.136 "w_mbytes_per_sec": 0 00:08:33.136 }, 00:08:33.136 "claimed": true, 00:08:33.136 "claim_type": "exclusive_write", 00:08:33.136 "zoned": false, 00:08:33.136 "supported_io_types": { 00:08:33.136 "read": true, 00:08:33.136 "write": true, 00:08:33.136 "unmap": true, 00:08:33.136 "flush": true, 00:08:33.136 "reset": true, 00:08:33.136 "nvme_admin": false, 00:08:33.136 "nvme_io": false, 00:08:33.136 "nvme_io_md": false, 00:08:33.136 "write_zeroes": true, 00:08:33.136 "zcopy": true, 00:08:33.136 "get_zone_info": false, 00:08:33.136 "zone_management": false, 00:08:33.136 "zone_append": false, 00:08:33.136 "compare": false, 00:08:33.136 "compare_and_write": false, 00:08:33.136 "abort": true, 00:08:33.136 "seek_hole": false, 00:08:33.136 "seek_data": false, 00:08:33.136 "copy": true, 00:08:33.136 "nvme_iov_md": false 00:08:33.136 }, 00:08:33.136 "memory_domains": [ 00:08:33.136 { 00:08:33.136 "dma_device_id": "system", 00:08:33.136 "dma_device_type": 1 00:08:33.136 }, 00:08:33.136 { 00:08:33.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.137 "dma_device_type": 2 00:08:33.137 } 00:08:33.137 ], 00:08:33.137 "driver_specific": {} 00:08:33.137 } 00:08:33.137 ] 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.137 "name": "Existed_Raid", 00:08:33.137 "uuid": "187d9d14-0f16-423c-b0dc-be707179ba28", 00:08:33.137 "strip_size_kb": 64, 00:08:33.137 "state": "configuring", 00:08:33.137 "raid_level": "raid0", 00:08:33.137 "superblock": true, 00:08:33.137 "num_base_bdevs": 3, 00:08:33.137 "num_base_bdevs_discovered": 1, 00:08:33.137 "num_base_bdevs_operational": 3, 00:08:33.137 "base_bdevs_list": [ 00:08:33.137 { 00:08:33.137 "name": "BaseBdev1", 00:08:33.137 "uuid": "12a28f2b-0fa0-44d3-b258-bf4ba87192d2", 00:08:33.137 "is_configured": true, 00:08:33.137 "data_offset": 2048, 00:08:33.137 "data_size": 63488 
00:08:33.137 }, 00:08:33.137 { 00:08:33.137 "name": "BaseBdev2", 00:08:33.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.137 "is_configured": false, 00:08:33.137 "data_offset": 0, 00:08:33.137 "data_size": 0 00:08:33.137 }, 00:08:33.137 { 00:08:33.137 "name": "BaseBdev3", 00:08:33.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.137 "is_configured": false, 00:08:33.137 "data_offset": 0, 00:08:33.137 "data_size": 0 00:08:33.137 } 00:08:33.137 ] 00:08:33.137 }' 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.137 03:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.707 [2024-11-18 03:08:37.119283] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.707 [2024-11-18 03:08:37.119353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.707 [2024-11-18 03:08:37.131305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.707 [2024-11-18 
03:08:37.133366] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.707 [2024-11-18 03:08:37.133452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.707 [2024-11-18 03:08:37.133482] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:33.707 [2024-11-18 03:08:37.133506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.707 "name": "Existed_Raid", 00:08:33.707 "uuid": "71fbf882-3ad1-4aae-9c77-092218ea8dba", 00:08:33.707 "strip_size_kb": 64, 00:08:33.707 "state": "configuring", 00:08:33.707 "raid_level": "raid0", 00:08:33.707 "superblock": true, 00:08:33.707 "num_base_bdevs": 3, 00:08:33.707 "num_base_bdevs_discovered": 1, 00:08:33.707 "num_base_bdevs_operational": 3, 00:08:33.707 "base_bdevs_list": [ 00:08:33.707 { 00:08:33.707 "name": "BaseBdev1", 00:08:33.707 "uuid": "12a28f2b-0fa0-44d3-b258-bf4ba87192d2", 00:08:33.707 "is_configured": true, 00:08:33.707 "data_offset": 2048, 00:08:33.707 "data_size": 63488 00:08:33.707 }, 00:08:33.707 { 00:08:33.707 "name": "BaseBdev2", 00:08:33.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.707 "is_configured": false, 00:08:33.707 "data_offset": 0, 00:08:33.707 "data_size": 0 00:08:33.707 }, 00:08:33.707 { 00:08:33.707 "name": "BaseBdev3", 00:08:33.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.707 "is_configured": false, 00:08:33.707 "data_offset": 0, 00:08:33.707 "data_size": 0 00:08:33.707 } 00:08:33.707 ] 00:08:33.707 }' 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.707 03:08:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.277 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:34.277 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.277 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.277 [2024-11-18 03:08:37.622560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.277 BaseBdev2 00:08:34.277 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.277 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:34.277 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:34.277 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.278 [ 00:08:34.278 { 00:08:34.278 "name": "BaseBdev2", 00:08:34.278 "aliases": [ 00:08:34.278 "d2b1414c-0e91-4586-a047-1f5984a4387f" 00:08:34.278 ], 00:08:34.278 "product_name": "Malloc disk", 00:08:34.278 "block_size": 512, 00:08:34.278 "num_blocks": 65536, 00:08:34.278 "uuid": "d2b1414c-0e91-4586-a047-1f5984a4387f", 00:08:34.278 "assigned_rate_limits": { 00:08:34.278 "rw_ios_per_sec": 0, 00:08:34.278 "rw_mbytes_per_sec": 0, 00:08:34.278 "r_mbytes_per_sec": 0, 00:08:34.278 "w_mbytes_per_sec": 0 00:08:34.278 }, 00:08:34.278 "claimed": true, 00:08:34.278 "claim_type": "exclusive_write", 00:08:34.278 "zoned": false, 00:08:34.278 "supported_io_types": { 00:08:34.278 "read": true, 00:08:34.278 "write": true, 00:08:34.278 "unmap": true, 00:08:34.278 "flush": true, 00:08:34.278 "reset": true, 00:08:34.278 "nvme_admin": false, 00:08:34.278 "nvme_io": false, 00:08:34.278 "nvme_io_md": false, 00:08:34.278 "write_zeroes": true, 00:08:34.278 "zcopy": true, 00:08:34.278 "get_zone_info": false, 00:08:34.278 "zone_management": false, 00:08:34.278 "zone_append": false, 00:08:34.278 "compare": false, 00:08:34.278 "compare_and_write": false, 00:08:34.278 "abort": true, 00:08:34.278 "seek_hole": false, 00:08:34.278 "seek_data": false, 00:08:34.278 "copy": true, 00:08:34.278 "nvme_iov_md": false 00:08:34.278 }, 00:08:34.278 "memory_domains": [ 00:08:34.278 { 00:08:34.278 "dma_device_id": "system", 00:08:34.278 "dma_device_type": 1 00:08:34.278 }, 00:08:34.278 { 00:08:34.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.278 "dma_device_type": 2 00:08:34.278 } 00:08:34.278 ], 00:08:34.278 "driver_specific": {} 00:08:34.278 } 00:08:34.278 ] 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.278 "name": "Existed_Raid", 00:08:34.278 "uuid": "71fbf882-3ad1-4aae-9c77-092218ea8dba", 00:08:34.278 "strip_size_kb": 64, 00:08:34.278 "state": "configuring", 00:08:34.278 "raid_level": "raid0", 00:08:34.278 "superblock": true, 00:08:34.278 "num_base_bdevs": 3, 00:08:34.278 "num_base_bdevs_discovered": 2, 00:08:34.278 "num_base_bdevs_operational": 3, 00:08:34.278 "base_bdevs_list": [ 00:08:34.278 { 00:08:34.278 "name": "BaseBdev1", 00:08:34.278 "uuid": "12a28f2b-0fa0-44d3-b258-bf4ba87192d2", 00:08:34.278 "is_configured": true, 00:08:34.278 "data_offset": 2048, 00:08:34.278 "data_size": 63488 00:08:34.278 }, 00:08:34.278 { 00:08:34.278 "name": "BaseBdev2", 00:08:34.278 "uuid": "d2b1414c-0e91-4586-a047-1f5984a4387f", 00:08:34.278 "is_configured": true, 00:08:34.278 "data_offset": 2048, 00:08:34.278 "data_size": 63488 00:08:34.278 }, 00:08:34.278 { 00:08:34.278 "name": "BaseBdev3", 00:08:34.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.278 "is_configured": false, 00:08:34.278 "data_offset": 0, 00:08:34.278 "data_size": 0 00:08:34.278 } 00:08:34.278 ] 00:08:34.278 }' 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.278 03:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.544 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:34.544 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.544 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.544 [2024-11-18 03:08:38.093023] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:34.544 [2024-11-18 03:08:38.093344] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:34.544 [2024-11-18 03:08:38.093395] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:34.544 [2024-11-18 03:08:38.093749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:34.544 BaseBdev3 00:08:34.544 [2024-11-18 03:08:38.093918] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:34.544 [2024-11-18 03:08:38.093987] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:34.544 [2024-11-18 03:08:38.094166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.545 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.545 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:34.545 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:34.545 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:34.545 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:34.545 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:34.545 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:34.545 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:34.545 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.545 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.545 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:34.545 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:34.545 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.545 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.824 [ 00:08:34.824 { 00:08:34.824 "name": "BaseBdev3", 00:08:34.824 "aliases": [ 00:08:34.824 "51cb1ca1-3b10-4063-9717-160d59de4ac3" 00:08:34.824 ], 00:08:34.824 "product_name": "Malloc disk", 00:08:34.824 "block_size": 512, 00:08:34.824 "num_blocks": 65536, 00:08:34.824 "uuid": "51cb1ca1-3b10-4063-9717-160d59de4ac3", 00:08:34.824 "assigned_rate_limits": { 00:08:34.824 "rw_ios_per_sec": 0, 00:08:34.824 "rw_mbytes_per_sec": 0, 00:08:34.824 "r_mbytes_per_sec": 0, 00:08:34.824 "w_mbytes_per_sec": 0 00:08:34.824 }, 00:08:34.824 "claimed": true, 00:08:34.824 "claim_type": "exclusive_write", 00:08:34.824 "zoned": false, 00:08:34.824 "supported_io_types": { 00:08:34.824 "read": true, 00:08:34.824 "write": true, 00:08:34.824 "unmap": true, 00:08:34.824 "flush": true, 00:08:34.824 "reset": true, 00:08:34.824 "nvme_admin": false, 00:08:34.824 "nvme_io": false, 00:08:34.824 "nvme_io_md": false, 00:08:34.824 "write_zeroes": true, 00:08:34.824 "zcopy": true, 00:08:34.824 "get_zone_info": false, 00:08:34.824 "zone_management": false, 00:08:34.824 "zone_append": false, 00:08:34.824 "compare": false, 00:08:34.824 "compare_and_write": false, 00:08:34.824 "abort": true, 00:08:34.824 "seek_hole": false, 00:08:34.824 "seek_data": false, 00:08:34.824 "copy": true, 00:08:34.824 "nvme_iov_md": false 00:08:34.824 }, 00:08:34.824 "memory_domains": [ 00:08:34.824 { 00:08:34.824 "dma_device_id": "system", 00:08:34.824 "dma_device_type": 1 00:08:34.824 }, 00:08:34.824 { 00:08:34.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.824 "dma_device_type": 2 00:08:34.824 } 00:08:34.824 ], 00:08:34.824 "driver_specific": 
{} 00:08:34.824 } 00:08:34.824 ] 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.824 "name": "Existed_Raid", 00:08:34.824 "uuid": "71fbf882-3ad1-4aae-9c77-092218ea8dba", 00:08:34.824 "strip_size_kb": 64, 00:08:34.824 "state": "online", 00:08:34.824 "raid_level": "raid0", 00:08:34.824 "superblock": true, 00:08:34.824 "num_base_bdevs": 3, 00:08:34.824 "num_base_bdevs_discovered": 3, 00:08:34.824 "num_base_bdevs_operational": 3, 00:08:34.824 "base_bdevs_list": [ 00:08:34.824 { 00:08:34.824 "name": "BaseBdev1", 00:08:34.824 "uuid": "12a28f2b-0fa0-44d3-b258-bf4ba87192d2", 00:08:34.824 "is_configured": true, 00:08:34.824 "data_offset": 2048, 00:08:34.824 "data_size": 63488 00:08:34.824 }, 00:08:34.824 { 00:08:34.824 "name": "BaseBdev2", 00:08:34.824 "uuid": "d2b1414c-0e91-4586-a047-1f5984a4387f", 00:08:34.824 "is_configured": true, 00:08:34.824 "data_offset": 2048, 00:08:34.824 "data_size": 63488 00:08:34.824 }, 00:08:34.824 { 00:08:34.824 "name": "BaseBdev3", 00:08:34.824 "uuid": "51cb1ca1-3b10-4063-9717-160d59de4ac3", 00:08:34.824 "is_configured": true, 00:08:34.824 "data_offset": 2048, 00:08:34.824 "data_size": 63488 00:08:34.824 } 00:08:34.824 ] 00:08:34.824 }' 00:08:34.824 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.825 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.085 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:35.085 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:35.085 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:35.085 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.085 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.085 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.085 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.085 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:35.085 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.085 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.085 [2024-11-18 03:08:38.572605] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.085 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.085 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:35.085 "name": "Existed_Raid", 00:08:35.085 "aliases": [ 00:08:35.085 "71fbf882-3ad1-4aae-9c77-092218ea8dba" 00:08:35.085 ], 00:08:35.085 "product_name": "Raid Volume", 00:08:35.085 "block_size": 512, 00:08:35.085 "num_blocks": 190464, 00:08:35.085 "uuid": "71fbf882-3ad1-4aae-9c77-092218ea8dba", 00:08:35.085 "assigned_rate_limits": { 00:08:35.085 "rw_ios_per_sec": 0, 00:08:35.085 "rw_mbytes_per_sec": 0, 00:08:35.085 "r_mbytes_per_sec": 0, 00:08:35.085 "w_mbytes_per_sec": 0 00:08:35.085 }, 00:08:35.085 "claimed": false, 00:08:35.085 "zoned": false, 00:08:35.085 "supported_io_types": { 00:08:35.085 "read": true, 00:08:35.085 "write": true, 00:08:35.085 "unmap": true, 00:08:35.085 "flush": true, 00:08:35.085 "reset": true, 00:08:35.085 "nvme_admin": false, 00:08:35.085 "nvme_io": false, 00:08:35.085 "nvme_io_md": false, 00:08:35.085 
"write_zeroes": true, 00:08:35.085 "zcopy": false, 00:08:35.085 "get_zone_info": false, 00:08:35.085 "zone_management": false, 00:08:35.085 "zone_append": false, 00:08:35.085 "compare": false, 00:08:35.085 "compare_and_write": false, 00:08:35.085 "abort": false, 00:08:35.085 "seek_hole": false, 00:08:35.085 "seek_data": false, 00:08:35.085 "copy": false, 00:08:35.085 "nvme_iov_md": false 00:08:35.085 }, 00:08:35.085 "memory_domains": [ 00:08:35.085 { 00:08:35.085 "dma_device_id": "system", 00:08:35.085 "dma_device_type": 1 00:08:35.085 }, 00:08:35.085 { 00:08:35.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.085 "dma_device_type": 2 00:08:35.085 }, 00:08:35.085 { 00:08:35.085 "dma_device_id": "system", 00:08:35.085 "dma_device_type": 1 00:08:35.085 }, 00:08:35.085 { 00:08:35.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.085 "dma_device_type": 2 00:08:35.085 }, 00:08:35.085 { 00:08:35.085 "dma_device_id": "system", 00:08:35.085 "dma_device_type": 1 00:08:35.085 }, 00:08:35.085 { 00:08:35.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.085 "dma_device_type": 2 00:08:35.085 } 00:08:35.085 ], 00:08:35.085 "driver_specific": { 00:08:35.085 "raid": { 00:08:35.085 "uuid": "71fbf882-3ad1-4aae-9c77-092218ea8dba", 00:08:35.085 "strip_size_kb": 64, 00:08:35.085 "state": "online", 00:08:35.085 "raid_level": "raid0", 00:08:35.085 "superblock": true, 00:08:35.085 "num_base_bdevs": 3, 00:08:35.085 "num_base_bdevs_discovered": 3, 00:08:35.085 "num_base_bdevs_operational": 3, 00:08:35.085 "base_bdevs_list": [ 00:08:35.085 { 00:08:35.085 "name": "BaseBdev1", 00:08:35.085 "uuid": "12a28f2b-0fa0-44d3-b258-bf4ba87192d2", 00:08:35.085 "is_configured": true, 00:08:35.085 "data_offset": 2048, 00:08:35.085 "data_size": 63488 00:08:35.085 }, 00:08:35.085 { 00:08:35.085 "name": "BaseBdev2", 00:08:35.085 "uuid": "d2b1414c-0e91-4586-a047-1f5984a4387f", 00:08:35.085 "is_configured": true, 00:08:35.085 "data_offset": 2048, 00:08:35.085 "data_size": 63488 00:08:35.085 }, 
00:08:35.085 { 00:08:35.085 "name": "BaseBdev3", 00:08:35.085 "uuid": "51cb1ca1-3b10-4063-9717-160d59de4ac3", 00:08:35.085 "is_configured": true, 00:08:35.085 "data_offset": 2048, 00:08:35.085 "data_size": 63488 00:08:35.085 } 00:08:35.085 ] 00:08:35.085 } 00:08:35.085 } 00:08:35.085 }' 00:08:35.085 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.085 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:35.085 BaseBdev2 00:08:35.085 BaseBdev3' 00:08:35.085 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.346 
03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.346 [2024-11-18 03:08:38.855874] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:35.346 [2024-11-18 03:08:38.855968] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.346 [2024-11-18 03:08:38.856078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.346 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.346 "name": "Existed_Raid", 00:08:35.346 "uuid": "71fbf882-3ad1-4aae-9c77-092218ea8dba", 00:08:35.346 "strip_size_kb": 64, 00:08:35.346 "state": "offline", 00:08:35.346 "raid_level": "raid0", 00:08:35.346 "superblock": true, 00:08:35.346 "num_base_bdevs": 3, 00:08:35.346 "num_base_bdevs_discovered": 2, 00:08:35.346 "num_base_bdevs_operational": 2, 00:08:35.346 "base_bdevs_list": [ 00:08:35.347 { 00:08:35.347 "name": null, 00:08:35.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.347 "is_configured": false, 00:08:35.347 "data_offset": 0, 00:08:35.347 "data_size": 63488 00:08:35.347 }, 00:08:35.347 { 00:08:35.347 "name": "BaseBdev2", 00:08:35.347 "uuid": "d2b1414c-0e91-4586-a047-1f5984a4387f", 00:08:35.347 "is_configured": true, 00:08:35.347 "data_offset": 2048, 00:08:35.347 "data_size": 63488 00:08:35.347 }, 00:08:35.347 { 00:08:35.347 "name": "BaseBdev3", 00:08:35.347 "uuid": "51cb1ca1-3b10-4063-9717-160d59de4ac3", 
00:08:35.347 "is_configured": true, 00:08:35.347 "data_offset": 2048, 00:08:35.347 "data_size": 63488 00:08:35.347 } 00:08:35.347 ] 00:08:35.347 }' 00:08:35.606 03:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.606 03:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.865 [2024-11-18 03:08:39.346748] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.865 [2024-11-18 03:08:39.402257] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:35.865 [2024-11-18 03:08:39.402315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:35.865 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.126 BaseBdev2 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:36.126 03:08:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.126 [ 00:08:36.126 { 00:08:36.126 "name": "BaseBdev2", 00:08:36.126 "aliases": [ 00:08:36.126 "45dd8bcb-bb61-4e6e-946d-979cef6787a5" 00:08:36.126 ], 00:08:36.126 "product_name": "Malloc disk", 00:08:36.126 "block_size": 512, 00:08:36.126 "num_blocks": 65536, 00:08:36.126 "uuid": "45dd8bcb-bb61-4e6e-946d-979cef6787a5", 00:08:36.126 "assigned_rate_limits": { 00:08:36.126 "rw_ios_per_sec": 0, 00:08:36.126 "rw_mbytes_per_sec": 0, 00:08:36.126 "r_mbytes_per_sec": 0, 00:08:36.126 "w_mbytes_per_sec": 0 00:08:36.126 }, 00:08:36.126 "claimed": false, 00:08:36.126 "zoned": false, 00:08:36.126 "supported_io_types": { 00:08:36.126 "read": true, 00:08:36.126 "write": true, 00:08:36.126 "unmap": true, 00:08:36.126 "flush": true, 00:08:36.126 "reset": true, 00:08:36.126 "nvme_admin": false, 00:08:36.126 "nvme_io": false, 00:08:36.126 "nvme_io_md": false, 00:08:36.126 "write_zeroes": true, 00:08:36.126 "zcopy": true, 00:08:36.126 "get_zone_info": false, 00:08:36.126 
"zone_management": false, 00:08:36.126 "zone_append": false, 00:08:36.126 "compare": false, 00:08:36.126 "compare_and_write": false, 00:08:36.126 "abort": true, 00:08:36.126 "seek_hole": false, 00:08:36.126 "seek_data": false, 00:08:36.126 "copy": true, 00:08:36.126 "nvme_iov_md": false 00:08:36.126 }, 00:08:36.126 "memory_domains": [ 00:08:36.126 { 00:08:36.126 "dma_device_id": "system", 00:08:36.126 "dma_device_type": 1 00:08:36.126 }, 00:08:36.126 { 00:08:36.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.126 "dma_device_type": 2 00:08:36.126 } 00:08:36.126 ], 00:08:36.126 "driver_specific": {} 00:08:36.126 } 00:08:36.126 ] 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.126 BaseBdev3 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.126 [ 00:08:36.126 { 00:08:36.126 "name": "BaseBdev3", 00:08:36.126 "aliases": [ 00:08:36.126 "6b526e45-c862-4217-acef-0f049c8d9764" 00:08:36.126 ], 00:08:36.126 "product_name": "Malloc disk", 00:08:36.126 "block_size": 512, 00:08:36.126 "num_blocks": 65536, 00:08:36.126 "uuid": "6b526e45-c862-4217-acef-0f049c8d9764", 00:08:36.126 "assigned_rate_limits": { 00:08:36.126 "rw_ios_per_sec": 0, 00:08:36.126 "rw_mbytes_per_sec": 0, 00:08:36.126 "r_mbytes_per_sec": 0, 00:08:36.126 "w_mbytes_per_sec": 0 00:08:36.126 }, 00:08:36.126 "claimed": false, 00:08:36.126 "zoned": false, 00:08:36.126 "supported_io_types": { 00:08:36.126 "read": true, 00:08:36.126 "write": true, 00:08:36.126 "unmap": true, 00:08:36.126 "flush": true, 00:08:36.126 "reset": true, 00:08:36.126 "nvme_admin": false, 00:08:36.126 "nvme_io": false, 00:08:36.126 "nvme_io_md": false, 00:08:36.126 "write_zeroes": true, 00:08:36.126 
"zcopy": true, 00:08:36.126 "get_zone_info": false, 00:08:36.126 "zone_management": false, 00:08:36.126 "zone_append": false, 00:08:36.126 "compare": false, 00:08:36.126 "compare_and_write": false, 00:08:36.126 "abort": true, 00:08:36.126 "seek_hole": false, 00:08:36.126 "seek_data": false, 00:08:36.126 "copy": true, 00:08:36.126 "nvme_iov_md": false 00:08:36.126 }, 00:08:36.126 "memory_domains": [ 00:08:36.126 { 00:08:36.126 "dma_device_id": "system", 00:08:36.126 "dma_device_type": 1 00:08:36.126 }, 00:08:36.126 { 00:08:36.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.126 "dma_device_type": 2 00:08:36.126 } 00:08:36.126 ], 00:08:36.126 "driver_specific": {} 00:08:36.126 } 00:08:36.126 ] 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.126 [2024-11-18 03:08:39.567531] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:36.126 [2024-11-18 03:08:39.567575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:36.126 [2024-11-18 03:08:39.567597] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.126 [2024-11-18 03:08:39.569548] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.126 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.127 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.127 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.127 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.127 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.127 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.127 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.127 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.127 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.127 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.127 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.127 03:08:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.127 "name": "Existed_Raid", 00:08:36.127 "uuid": "f4fd4f69-8bf0-455e-8ab5-3d15de4d9fcd", 00:08:36.127 "strip_size_kb": 64, 00:08:36.127 "state": "configuring", 00:08:36.127 "raid_level": "raid0", 00:08:36.127 "superblock": true, 00:08:36.127 "num_base_bdevs": 3, 00:08:36.127 "num_base_bdevs_discovered": 2, 00:08:36.127 "num_base_bdevs_operational": 3, 00:08:36.127 "base_bdevs_list": [ 00:08:36.127 { 00:08:36.127 "name": "BaseBdev1", 00:08:36.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.127 "is_configured": false, 00:08:36.127 "data_offset": 0, 00:08:36.127 "data_size": 0 00:08:36.127 }, 00:08:36.127 { 00:08:36.127 "name": "BaseBdev2", 00:08:36.127 "uuid": "45dd8bcb-bb61-4e6e-946d-979cef6787a5", 00:08:36.127 "is_configured": true, 00:08:36.127 "data_offset": 2048, 00:08:36.127 "data_size": 63488 00:08:36.127 }, 00:08:36.127 { 00:08:36.127 "name": "BaseBdev3", 00:08:36.127 "uuid": "6b526e45-c862-4217-acef-0f049c8d9764", 00:08:36.127 "is_configured": true, 00:08:36.127 "data_offset": 2048, 00:08:36.127 "data_size": 63488 00:08:36.127 } 00:08:36.127 ] 00:08:36.127 }' 00:08:36.127 03:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.127 03:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.697 [2024-11-18 03:08:40.026725] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.697 03:08:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.697 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.697 "name": "Existed_Raid", 00:08:36.697 "uuid": "f4fd4f69-8bf0-455e-8ab5-3d15de4d9fcd", 00:08:36.697 "strip_size_kb": 64, 
00:08:36.697 "state": "configuring", 00:08:36.697 "raid_level": "raid0", 00:08:36.697 "superblock": true, 00:08:36.697 "num_base_bdevs": 3, 00:08:36.697 "num_base_bdevs_discovered": 1, 00:08:36.697 "num_base_bdevs_operational": 3, 00:08:36.697 "base_bdevs_list": [ 00:08:36.698 { 00:08:36.698 "name": "BaseBdev1", 00:08:36.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.698 "is_configured": false, 00:08:36.698 "data_offset": 0, 00:08:36.698 "data_size": 0 00:08:36.698 }, 00:08:36.698 { 00:08:36.698 "name": null, 00:08:36.698 "uuid": "45dd8bcb-bb61-4e6e-946d-979cef6787a5", 00:08:36.698 "is_configured": false, 00:08:36.698 "data_offset": 0, 00:08:36.698 "data_size": 63488 00:08:36.698 }, 00:08:36.698 { 00:08:36.698 "name": "BaseBdev3", 00:08:36.698 "uuid": "6b526e45-c862-4217-acef-0f049c8d9764", 00:08:36.698 "is_configured": true, 00:08:36.698 "data_offset": 2048, 00:08:36.698 "data_size": 63488 00:08:36.698 } 00:08:36.698 ] 00:08:36.698 }' 00:08:36.698 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.698 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.957 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.957 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.957 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.957 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:36.957 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.957 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:36.957 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:36.957 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.957 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.958 [2024-11-18 03:08:40.509099] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.958 BaseBdev1 00:08:36.958 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.958 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:36.958 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:36.958 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:36.958 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:36.958 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:36.958 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:36.958 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:36.958 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.958 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.958 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.958 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:36.958 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.958 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.216 
[ 00:08:37.216 { 00:08:37.216 "name": "BaseBdev1", 00:08:37.216 "aliases": [ 00:08:37.216 "99b7b32a-b1c2-4364-b79c-8324e361ffe6" 00:08:37.216 ], 00:08:37.216 "product_name": "Malloc disk", 00:08:37.216 "block_size": 512, 00:08:37.216 "num_blocks": 65536, 00:08:37.216 "uuid": "99b7b32a-b1c2-4364-b79c-8324e361ffe6", 00:08:37.216 "assigned_rate_limits": { 00:08:37.216 "rw_ios_per_sec": 0, 00:08:37.216 "rw_mbytes_per_sec": 0, 00:08:37.216 "r_mbytes_per_sec": 0, 00:08:37.216 "w_mbytes_per_sec": 0 00:08:37.216 }, 00:08:37.216 "claimed": true, 00:08:37.216 "claim_type": "exclusive_write", 00:08:37.216 "zoned": false, 00:08:37.216 "supported_io_types": { 00:08:37.216 "read": true, 00:08:37.216 "write": true, 00:08:37.216 "unmap": true, 00:08:37.216 "flush": true, 00:08:37.216 "reset": true, 00:08:37.216 "nvme_admin": false, 00:08:37.216 "nvme_io": false, 00:08:37.216 "nvme_io_md": false, 00:08:37.216 "write_zeroes": true, 00:08:37.216 "zcopy": true, 00:08:37.216 "get_zone_info": false, 00:08:37.216 "zone_management": false, 00:08:37.216 "zone_append": false, 00:08:37.216 "compare": false, 00:08:37.216 "compare_and_write": false, 00:08:37.216 "abort": true, 00:08:37.216 "seek_hole": false, 00:08:37.216 "seek_data": false, 00:08:37.216 "copy": true, 00:08:37.216 "nvme_iov_md": false 00:08:37.216 }, 00:08:37.216 "memory_domains": [ 00:08:37.216 { 00:08:37.216 "dma_device_id": "system", 00:08:37.216 "dma_device_type": 1 00:08:37.216 }, 00:08:37.216 { 00:08:37.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.216 "dma_device_type": 2 00:08:37.216 } 00:08:37.216 ], 00:08:37.216 "driver_specific": {} 00:08:37.216 } 00:08:37.216 ] 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.216 "name": "Existed_Raid", 00:08:37.216 "uuid": "f4fd4f69-8bf0-455e-8ab5-3d15de4d9fcd", 00:08:37.216 "strip_size_kb": 64, 00:08:37.216 "state": "configuring", 00:08:37.216 "raid_level": "raid0", 00:08:37.216 "superblock": true, 
00:08:37.216 "num_base_bdevs": 3, 00:08:37.216 "num_base_bdevs_discovered": 2, 00:08:37.216 "num_base_bdevs_operational": 3, 00:08:37.216 "base_bdevs_list": [ 00:08:37.216 { 00:08:37.216 "name": "BaseBdev1", 00:08:37.216 "uuid": "99b7b32a-b1c2-4364-b79c-8324e361ffe6", 00:08:37.216 "is_configured": true, 00:08:37.216 "data_offset": 2048, 00:08:37.216 "data_size": 63488 00:08:37.216 }, 00:08:37.216 { 00:08:37.216 "name": null, 00:08:37.216 "uuid": "45dd8bcb-bb61-4e6e-946d-979cef6787a5", 00:08:37.216 "is_configured": false, 00:08:37.216 "data_offset": 0, 00:08:37.216 "data_size": 63488 00:08:37.216 }, 00:08:37.216 { 00:08:37.216 "name": "BaseBdev3", 00:08:37.216 "uuid": "6b526e45-c862-4217-acef-0f049c8d9764", 00:08:37.216 "is_configured": true, 00:08:37.216 "data_offset": 2048, 00:08:37.216 "data_size": 63488 00:08:37.216 } 00:08:37.216 ] 00:08:37.216 }' 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.216 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.475 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.475 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.475 03:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.475 03:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:37.475 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.475 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:37.475 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:37.475 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:08:37.475 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.475 [2024-11-18 03:08:41.048264] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.734 "name": "Existed_Raid", 00:08:37.734 "uuid": "f4fd4f69-8bf0-455e-8ab5-3d15de4d9fcd", 00:08:37.734 "strip_size_kb": 64, 00:08:37.734 "state": "configuring", 00:08:37.734 "raid_level": "raid0", 00:08:37.734 "superblock": true, 00:08:37.734 "num_base_bdevs": 3, 00:08:37.734 "num_base_bdevs_discovered": 1, 00:08:37.734 "num_base_bdevs_operational": 3, 00:08:37.734 "base_bdevs_list": [ 00:08:37.734 { 00:08:37.734 "name": "BaseBdev1", 00:08:37.734 "uuid": "99b7b32a-b1c2-4364-b79c-8324e361ffe6", 00:08:37.734 "is_configured": true, 00:08:37.734 "data_offset": 2048, 00:08:37.734 "data_size": 63488 00:08:37.734 }, 00:08:37.734 { 00:08:37.734 "name": null, 00:08:37.734 "uuid": "45dd8bcb-bb61-4e6e-946d-979cef6787a5", 00:08:37.734 "is_configured": false, 00:08:37.734 "data_offset": 0, 00:08:37.734 "data_size": 63488 00:08:37.734 }, 00:08:37.734 { 00:08:37.734 "name": null, 00:08:37.734 "uuid": "6b526e45-c862-4217-acef-0f049c8d9764", 00:08:37.734 "is_configured": false, 00:08:37.734 "data_offset": 0, 00:08:37.734 "data_size": 63488 00:08:37.734 } 00:08:37.734 ] 00:08:37.734 }' 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.734 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.993 [2024-11-18 03:08:41.535468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.993 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.252 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.252 "name": "Existed_Raid", 00:08:38.252 "uuid": "f4fd4f69-8bf0-455e-8ab5-3d15de4d9fcd", 00:08:38.252 "strip_size_kb": 64, 00:08:38.252 "state": "configuring", 00:08:38.252 "raid_level": "raid0", 00:08:38.252 "superblock": true, 00:08:38.252 "num_base_bdevs": 3, 00:08:38.252 "num_base_bdevs_discovered": 2, 00:08:38.252 "num_base_bdevs_operational": 3, 00:08:38.252 "base_bdevs_list": [ 00:08:38.252 { 00:08:38.252 "name": "BaseBdev1", 00:08:38.252 "uuid": "99b7b32a-b1c2-4364-b79c-8324e361ffe6", 00:08:38.252 "is_configured": true, 00:08:38.252 "data_offset": 2048, 00:08:38.252 "data_size": 63488 00:08:38.252 }, 00:08:38.252 { 00:08:38.252 "name": null, 00:08:38.252 "uuid": "45dd8bcb-bb61-4e6e-946d-979cef6787a5", 00:08:38.252 "is_configured": false, 00:08:38.252 "data_offset": 0, 00:08:38.252 "data_size": 63488 00:08:38.252 }, 00:08:38.252 { 00:08:38.252 "name": "BaseBdev3", 00:08:38.252 "uuid": "6b526e45-c862-4217-acef-0f049c8d9764", 00:08:38.252 "is_configured": true, 00:08:38.252 "data_offset": 2048, 00:08:38.252 "data_size": 63488 00:08:38.252 } 00:08:38.252 ] 00:08:38.252 }' 00:08:38.252 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.252 03:08:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:38.512 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.512 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.512 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.512 03:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:38.512 03:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.512 [2024-11-18 03:08:42.022642] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.512 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.512 "name": "Existed_Raid", 00:08:38.512 "uuid": "f4fd4f69-8bf0-455e-8ab5-3d15de4d9fcd", 00:08:38.512 "strip_size_kb": 64, 00:08:38.512 "state": "configuring", 00:08:38.512 "raid_level": "raid0", 00:08:38.512 "superblock": true, 00:08:38.512 "num_base_bdevs": 3, 00:08:38.513 "num_base_bdevs_discovered": 1, 00:08:38.513 "num_base_bdevs_operational": 3, 00:08:38.513 "base_bdevs_list": [ 00:08:38.513 { 00:08:38.513 "name": null, 00:08:38.513 "uuid": "99b7b32a-b1c2-4364-b79c-8324e361ffe6", 00:08:38.513 "is_configured": false, 00:08:38.513 "data_offset": 0, 00:08:38.513 "data_size": 63488 00:08:38.513 }, 00:08:38.513 { 00:08:38.513 "name": null, 00:08:38.513 "uuid": "45dd8bcb-bb61-4e6e-946d-979cef6787a5", 00:08:38.513 "is_configured": false, 00:08:38.513 "data_offset": 0, 00:08:38.513 
"data_size": 63488 00:08:38.513 }, 00:08:38.513 { 00:08:38.513 "name": "BaseBdev3", 00:08:38.513 "uuid": "6b526e45-c862-4217-acef-0f049c8d9764", 00:08:38.513 "is_configured": true, 00:08:38.513 "data_offset": 2048, 00:08:38.513 "data_size": 63488 00:08:38.513 } 00:08:38.513 ] 00:08:38.513 }' 00:08:38.513 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.513 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.081 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.082 [2024-11-18 03:08:42.496397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:39.082 03:08:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.082 "name": "Existed_Raid", 00:08:39.082 "uuid": "f4fd4f69-8bf0-455e-8ab5-3d15de4d9fcd", 00:08:39.082 "strip_size_kb": 64, 00:08:39.082 "state": "configuring", 00:08:39.082 "raid_level": "raid0", 00:08:39.082 "superblock": true, 00:08:39.082 "num_base_bdevs": 3, 00:08:39.082 
"num_base_bdevs_discovered": 2, 00:08:39.082 "num_base_bdevs_operational": 3, 00:08:39.082 "base_bdevs_list": [ 00:08:39.082 { 00:08:39.082 "name": null, 00:08:39.082 "uuid": "99b7b32a-b1c2-4364-b79c-8324e361ffe6", 00:08:39.082 "is_configured": false, 00:08:39.082 "data_offset": 0, 00:08:39.082 "data_size": 63488 00:08:39.082 }, 00:08:39.082 { 00:08:39.082 "name": "BaseBdev2", 00:08:39.082 "uuid": "45dd8bcb-bb61-4e6e-946d-979cef6787a5", 00:08:39.082 "is_configured": true, 00:08:39.082 "data_offset": 2048, 00:08:39.082 "data_size": 63488 00:08:39.082 }, 00:08:39.082 { 00:08:39.082 "name": "BaseBdev3", 00:08:39.082 "uuid": "6b526e45-c862-4217-acef-0f049c8d9764", 00:08:39.082 "is_configured": true, 00:08:39.082 "data_offset": 2048, 00:08:39.082 "data_size": 63488 00:08:39.082 } 00:08:39.082 ] 00:08:39.082 }' 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.082 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.650 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.650 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:39.650 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.650 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.650 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.650 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:39.650 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.650 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.650 03:08:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.650 03:08:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:39.650 03:08:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.650 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 99b7b32a-b1c2-4364-b79c-8324e361ffe6 00:08:39.650 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.650 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.651 [2024-11-18 03:08:43.046623] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:39.651 [2024-11-18 03:08:43.046801] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:39.651 [2024-11-18 03:08:43.046841] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:39.651 [2024-11-18 03:08:43.047122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:39.651 NewBaseBdev 00:08:39.651 [2024-11-18 03:08:43.047259] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:39.651 [2024-11-18 03:08:43.047272] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:39.651 [2024-11-18 03:08:43.047384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 
00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.651 [ 00:08:39.651 { 00:08:39.651 "name": "NewBaseBdev", 00:08:39.651 "aliases": [ 00:08:39.651 "99b7b32a-b1c2-4364-b79c-8324e361ffe6" 00:08:39.651 ], 00:08:39.651 "product_name": "Malloc disk", 00:08:39.651 "block_size": 512, 00:08:39.651 "num_blocks": 65536, 00:08:39.651 "uuid": "99b7b32a-b1c2-4364-b79c-8324e361ffe6", 00:08:39.651 "assigned_rate_limits": { 00:08:39.651 "rw_ios_per_sec": 0, 00:08:39.651 "rw_mbytes_per_sec": 0, 00:08:39.651 "r_mbytes_per_sec": 0, 00:08:39.651 "w_mbytes_per_sec": 0 00:08:39.651 }, 00:08:39.651 "claimed": true, 00:08:39.651 "claim_type": "exclusive_write", 00:08:39.651 "zoned": false, 00:08:39.651 "supported_io_types": { 00:08:39.651 "read": true, 00:08:39.651 "write": true, 
00:08:39.651 "unmap": true, 00:08:39.651 "flush": true, 00:08:39.651 "reset": true, 00:08:39.651 "nvme_admin": false, 00:08:39.651 "nvme_io": false, 00:08:39.651 "nvme_io_md": false, 00:08:39.651 "write_zeroes": true, 00:08:39.651 "zcopy": true, 00:08:39.651 "get_zone_info": false, 00:08:39.651 "zone_management": false, 00:08:39.651 "zone_append": false, 00:08:39.651 "compare": false, 00:08:39.651 "compare_and_write": false, 00:08:39.651 "abort": true, 00:08:39.651 "seek_hole": false, 00:08:39.651 "seek_data": false, 00:08:39.651 "copy": true, 00:08:39.651 "nvme_iov_md": false 00:08:39.651 }, 00:08:39.651 "memory_domains": [ 00:08:39.651 { 00:08:39.651 "dma_device_id": "system", 00:08:39.651 "dma_device_type": 1 00:08:39.651 }, 00:08:39.651 { 00:08:39.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.651 "dma_device_type": 2 00:08:39.651 } 00:08:39.651 ], 00:08:39.651 "driver_specific": {} 00:08:39.651 } 00:08:39.651 ] 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.651 "name": "Existed_Raid", 00:08:39.651 "uuid": "f4fd4f69-8bf0-455e-8ab5-3d15de4d9fcd", 00:08:39.651 "strip_size_kb": 64, 00:08:39.651 "state": "online", 00:08:39.651 "raid_level": "raid0", 00:08:39.651 "superblock": true, 00:08:39.651 "num_base_bdevs": 3, 00:08:39.651 "num_base_bdevs_discovered": 3, 00:08:39.651 "num_base_bdevs_operational": 3, 00:08:39.651 "base_bdevs_list": [ 00:08:39.651 { 00:08:39.651 "name": "NewBaseBdev", 00:08:39.651 "uuid": "99b7b32a-b1c2-4364-b79c-8324e361ffe6", 00:08:39.651 "is_configured": true, 00:08:39.651 "data_offset": 2048, 00:08:39.651 "data_size": 63488 00:08:39.651 }, 00:08:39.651 { 00:08:39.651 "name": "BaseBdev2", 00:08:39.651 "uuid": "45dd8bcb-bb61-4e6e-946d-979cef6787a5", 00:08:39.651 "is_configured": true, 00:08:39.651 "data_offset": 2048, 00:08:39.651 "data_size": 63488 00:08:39.651 }, 00:08:39.651 { 00:08:39.651 "name": "BaseBdev3", 00:08:39.651 "uuid": 
"6b526e45-c862-4217-acef-0f049c8d9764", 00:08:39.651 "is_configured": true, 00:08:39.651 "data_offset": 2048, 00:08:39.651 "data_size": 63488 00:08:39.651 } 00:08:39.651 ] 00:08:39.651 }' 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.651 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.220 [2024-11-18 03:08:43.526247] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.220 "name": "Existed_Raid", 00:08:40.220 "aliases": [ 00:08:40.220 "f4fd4f69-8bf0-455e-8ab5-3d15de4d9fcd" 
00:08:40.220 ], 00:08:40.220 "product_name": "Raid Volume", 00:08:40.220 "block_size": 512, 00:08:40.220 "num_blocks": 190464, 00:08:40.220 "uuid": "f4fd4f69-8bf0-455e-8ab5-3d15de4d9fcd", 00:08:40.220 "assigned_rate_limits": { 00:08:40.220 "rw_ios_per_sec": 0, 00:08:40.220 "rw_mbytes_per_sec": 0, 00:08:40.220 "r_mbytes_per_sec": 0, 00:08:40.220 "w_mbytes_per_sec": 0 00:08:40.220 }, 00:08:40.220 "claimed": false, 00:08:40.220 "zoned": false, 00:08:40.220 "supported_io_types": { 00:08:40.220 "read": true, 00:08:40.220 "write": true, 00:08:40.220 "unmap": true, 00:08:40.220 "flush": true, 00:08:40.220 "reset": true, 00:08:40.220 "nvme_admin": false, 00:08:40.220 "nvme_io": false, 00:08:40.220 "nvme_io_md": false, 00:08:40.220 "write_zeroes": true, 00:08:40.220 "zcopy": false, 00:08:40.220 "get_zone_info": false, 00:08:40.220 "zone_management": false, 00:08:40.220 "zone_append": false, 00:08:40.220 "compare": false, 00:08:40.220 "compare_and_write": false, 00:08:40.220 "abort": false, 00:08:40.220 "seek_hole": false, 00:08:40.220 "seek_data": false, 00:08:40.220 "copy": false, 00:08:40.220 "nvme_iov_md": false 00:08:40.220 }, 00:08:40.220 "memory_domains": [ 00:08:40.220 { 00:08:40.220 "dma_device_id": "system", 00:08:40.220 "dma_device_type": 1 00:08:40.220 }, 00:08:40.220 { 00:08:40.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.220 "dma_device_type": 2 00:08:40.220 }, 00:08:40.220 { 00:08:40.220 "dma_device_id": "system", 00:08:40.220 "dma_device_type": 1 00:08:40.220 }, 00:08:40.220 { 00:08:40.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.220 "dma_device_type": 2 00:08:40.220 }, 00:08:40.220 { 00:08:40.220 "dma_device_id": "system", 00:08:40.220 "dma_device_type": 1 00:08:40.220 }, 00:08:40.220 { 00:08:40.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.220 "dma_device_type": 2 00:08:40.220 } 00:08:40.220 ], 00:08:40.220 "driver_specific": { 00:08:40.220 "raid": { 00:08:40.220 "uuid": "f4fd4f69-8bf0-455e-8ab5-3d15de4d9fcd", 00:08:40.220 
"strip_size_kb": 64, 00:08:40.220 "state": "online", 00:08:40.220 "raid_level": "raid0", 00:08:40.220 "superblock": true, 00:08:40.220 "num_base_bdevs": 3, 00:08:40.220 "num_base_bdevs_discovered": 3, 00:08:40.220 "num_base_bdevs_operational": 3, 00:08:40.220 "base_bdevs_list": [ 00:08:40.220 { 00:08:40.220 "name": "NewBaseBdev", 00:08:40.220 "uuid": "99b7b32a-b1c2-4364-b79c-8324e361ffe6", 00:08:40.220 "is_configured": true, 00:08:40.220 "data_offset": 2048, 00:08:40.220 "data_size": 63488 00:08:40.220 }, 00:08:40.220 { 00:08:40.220 "name": "BaseBdev2", 00:08:40.220 "uuid": "45dd8bcb-bb61-4e6e-946d-979cef6787a5", 00:08:40.220 "is_configured": true, 00:08:40.220 "data_offset": 2048, 00:08:40.220 "data_size": 63488 00:08:40.220 }, 00:08:40.220 { 00:08:40.220 "name": "BaseBdev3", 00:08:40.220 "uuid": "6b526e45-c862-4217-acef-0f049c8d9764", 00:08:40.220 "is_configured": true, 00:08:40.220 "data_offset": 2048, 00:08:40.220 "data_size": 63488 00:08:40.220 } 00:08:40.220 ] 00:08:40.220 } 00:08:40.220 } 00:08:40.220 }' 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:40.220 BaseBdev2 00:08:40.220 BaseBdev3' 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:40.220 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.221 03:08:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.221 [2024-11-18 03:08:43.785471] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.221 [2024-11-18 03:08:43.785507] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.221 [2024-11-18 03:08:43.785602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.221 [2024-11-18 03:08:43.785659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:40.221 [2024-11-18 03:08:43.785688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75786 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75786 ']' 00:08:40.221 03:08:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 75786 00:08:40.221 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:40.480 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:40.480 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75786 00:08:40.480 killing process with pid 75786 00:08:40.480 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:40.480 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:40.480 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75786' 00:08:40.480 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75786 00:08:40.480 [2024-11-18 03:08:43.833811] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:40.480 03:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75786 00:08:40.480 [2024-11-18 03:08:43.865514] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:40.740 ************************************ 00:08:40.740 END TEST raid_state_function_test_sb 00:08:40.740 03:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:40.740 00:08:40.740 real 0m8.904s 00:08:40.740 user 0m15.250s 00:08:40.740 sys 0m1.688s 00:08:40.740 03:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.740 03:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.740 ************************************ 00:08:40.740 03:08:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:40.740 03:08:44 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:40.740 03:08:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.740 03:08:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:40.740 ************************************ 00:08:40.740 START TEST raid_superblock_test 00:08:40.740 ************************************ 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:40.740 03:08:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76390 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76390 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 76390 ']' 00:08:40.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.740 03:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.740 [2024-11-18 03:08:44.256192] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:40.740 [2024-11-18 03:08:44.256439] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76390 ] 00:08:41.000 [2024-11-18 03:08:44.417690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.000 [2024-11-18 03:08:44.469192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.000 [2024-11-18 03:08:44.511937] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.000 [2024-11-18 03:08:44.511989] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.568 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.568 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:41.568 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:41.568 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.568 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:41.568 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:41.568 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:41.568 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.568 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.568 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.568 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:41.568 
03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.568 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.829 malloc1 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.829 [2024-11-18 03:08:45.154418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:41.829 [2024-11-18 03:08:45.154553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.829 [2024-11-18 03:08:45.154593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:41.829 [2024-11-18 03:08:45.154628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.829 [2024-11-18 03:08:45.157014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.829 [2024-11-18 03:08:45.157095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:41.829 pt1 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.829 malloc2 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.829 [2024-11-18 03:08:45.196780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:41.829 [2024-11-18 03:08:45.196854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.829 [2024-11-18 03:08:45.196874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:41.829 [2024-11-18 03:08:45.196886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.829 [2024-11-18 03:08:45.199541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.829 [2024-11-18 03:08:45.199590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:41.829 
pt2 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.829 malloc3 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.829 [2024-11-18 03:08:45.225548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:41.829 [2024-11-18 03:08:45.225657] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.829 [2024-11-18 03:08:45.225693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:41.829 [2024-11-18 03:08:45.225723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.829 [2024-11-18 03:08:45.228013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.829 [2024-11-18 03:08:45.228093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:41.829 pt3 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.829 [2024-11-18 03:08:45.237580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:41.829 [2024-11-18 03:08:45.239612] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:41.829 [2024-11-18 03:08:45.239742] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:41.829 [2024-11-18 03:08:45.239927] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:41.829 [2024-11-18 03:08:45.239989] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:41.829 [2024-11-18 03:08:45.240303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:08:41.829 [2024-11-18 03:08:45.240489] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:41.829 [2024-11-18 03:08:45.240540] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:41.829 [2024-11-18 03:08:45.240723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:41.829 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.830 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.830 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.830 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.830 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.830 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.830 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.830 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.830 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.830 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.830 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.830 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.830 03:08:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.830 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.830 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.830 "name": "raid_bdev1", 00:08:41.830 "uuid": "69fa8bfb-3148-4c89-a049-028110952fd7", 00:08:41.830 "strip_size_kb": 64, 00:08:41.830 "state": "online", 00:08:41.830 "raid_level": "raid0", 00:08:41.830 "superblock": true, 00:08:41.830 "num_base_bdevs": 3, 00:08:41.830 "num_base_bdevs_discovered": 3, 00:08:41.830 "num_base_bdevs_operational": 3, 00:08:41.830 "base_bdevs_list": [ 00:08:41.830 { 00:08:41.830 "name": "pt1", 00:08:41.830 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:41.830 "is_configured": true, 00:08:41.830 "data_offset": 2048, 00:08:41.830 "data_size": 63488 00:08:41.830 }, 00:08:41.830 { 00:08:41.830 "name": "pt2", 00:08:41.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.830 "is_configured": true, 00:08:41.830 "data_offset": 2048, 00:08:41.830 "data_size": 63488 00:08:41.830 }, 00:08:41.830 { 00:08:41.830 "name": "pt3", 00:08:41.830 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:41.830 "is_configured": true, 00:08:41.830 "data_offset": 2048, 00:08:41.830 "data_size": 63488 00:08:41.830 } 00:08:41.830 ] 00:08:41.830 }' 00:08:41.830 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.830 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.399 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:42.399 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:42.399 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:42.399 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:42.399 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.399 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.399 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:42.399 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:42.399 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.399 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.399 [2024-11-18 03:08:45.713163] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.399 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.399 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.399 "name": "raid_bdev1", 00:08:42.399 "aliases": [ 00:08:42.399 "69fa8bfb-3148-4c89-a049-028110952fd7" 00:08:42.399 ], 00:08:42.399 "product_name": "Raid Volume", 00:08:42.399 "block_size": 512, 00:08:42.399 "num_blocks": 190464, 00:08:42.399 "uuid": "69fa8bfb-3148-4c89-a049-028110952fd7", 00:08:42.399 "assigned_rate_limits": { 00:08:42.399 "rw_ios_per_sec": 0, 00:08:42.399 "rw_mbytes_per_sec": 0, 00:08:42.399 "r_mbytes_per_sec": 0, 00:08:42.399 "w_mbytes_per_sec": 0 00:08:42.399 }, 00:08:42.399 "claimed": false, 00:08:42.399 "zoned": false, 00:08:42.399 "supported_io_types": { 00:08:42.399 "read": true, 00:08:42.399 "write": true, 00:08:42.399 "unmap": true, 00:08:42.399 "flush": true, 00:08:42.399 "reset": true, 00:08:42.399 "nvme_admin": false, 00:08:42.399 "nvme_io": false, 00:08:42.399 "nvme_io_md": false, 00:08:42.399 "write_zeroes": true, 00:08:42.399 "zcopy": false, 00:08:42.399 "get_zone_info": false, 00:08:42.399 "zone_management": false, 00:08:42.399 "zone_append": false, 00:08:42.399 "compare": 
false, 00:08:42.399 "compare_and_write": false, 00:08:42.399 "abort": false, 00:08:42.399 "seek_hole": false, 00:08:42.399 "seek_data": false, 00:08:42.399 "copy": false, 00:08:42.399 "nvme_iov_md": false 00:08:42.399 }, 00:08:42.399 "memory_domains": [ 00:08:42.399 { 00:08:42.399 "dma_device_id": "system", 00:08:42.399 "dma_device_type": 1 00:08:42.399 }, 00:08:42.399 { 00:08:42.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.399 "dma_device_type": 2 00:08:42.399 }, 00:08:42.399 { 00:08:42.399 "dma_device_id": "system", 00:08:42.399 "dma_device_type": 1 00:08:42.399 }, 00:08:42.399 { 00:08:42.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.399 "dma_device_type": 2 00:08:42.399 }, 00:08:42.399 { 00:08:42.399 "dma_device_id": "system", 00:08:42.399 "dma_device_type": 1 00:08:42.399 }, 00:08:42.399 { 00:08:42.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.399 "dma_device_type": 2 00:08:42.399 } 00:08:42.399 ], 00:08:42.399 "driver_specific": { 00:08:42.399 "raid": { 00:08:42.399 "uuid": "69fa8bfb-3148-4c89-a049-028110952fd7", 00:08:42.400 "strip_size_kb": 64, 00:08:42.400 "state": "online", 00:08:42.400 "raid_level": "raid0", 00:08:42.400 "superblock": true, 00:08:42.400 "num_base_bdevs": 3, 00:08:42.400 "num_base_bdevs_discovered": 3, 00:08:42.400 "num_base_bdevs_operational": 3, 00:08:42.400 "base_bdevs_list": [ 00:08:42.400 { 00:08:42.400 "name": "pt1", 00:08:42.400 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.400 "is_configured": true, 00:08:42.400 "data_offset": 2048, 00:08:42.400 "data_size": 63488 00:08:42.400 }, 00:08:42.400 { 00:08:42.400 "name": "pt2", 00:08:42.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.400 "is_configured": true, 00:08:42.400 "data_offset": 2048, 00:08:42.400 "data_size": 63488 00:08:42.400 }, 00:08:42.400 { 00:08:42.400 "name": "pt3", 00:08:42.400 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.400 "is_configured": true, 00:08:42.400 "data_offset": 2048, 00:08:42.400 "data_size": 
63488 00:08:42.400 } 00:08:42.400 ] 00:08:42.400 } 00:08:42.400 } 00:08:42.400 }' 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:42.400 pt2 00:08:42.400 pt3' 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.400 
03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.400 03:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.660 [2024-11-18 03:08:46.016546] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=69fa8bfb-3148-4c89-a049-028110952fd7 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 69fa8bfb-3148-4c89-a049-028110952fd7 ']' 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.660 [2024-11-18 03:08:46.060153] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.660 [2024-11-18 03:08:46.060187] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.660 [2024-11-18 03:08:46.060285] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.660 [2024-11-18 03:08:46.060356] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.660 [2024-11-18 03:08:46.060370] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:42.660 03:08:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.660 [2024-11-18 03:08:46.199985] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:42.660 [2024-11-18 03:08:46.202116] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:42.660 [2024-11-18 03:08:46.202214] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:42.660 [2024-11-18 03:08:46.202293] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:42.660 [2024-11-18 03:08:46.202384] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:42.660 [2024-11-18 03:08:46.202463] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:42.660 [2024-11-18 03:08:46.202480] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.660 [2024-11-18 03:08:46.202494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:42.660 request: 00:08:42.660 { 00:08:42.660 "name": "raid_bdev1", 00:08:42.660 "raid_level": "raid0", 00:08:42.660 "base_bdevs": [ 00:08:42.660 "malloc1", 00:08:42.660 "malloc2", 00:08:42.660 "malloc3" 00:08:42.660 ], 00:08:42.660 "strip_size_kb": 64, 00:08:42.660 "superblock": false, 00:08:42.660 "method": "bdev_raid_create", 00:08:42.660 "req_id": 1 00:08:42.660 } 00:08:42.660 Got JSON-RPC error response 00:08:42.660 response: 00:08:42.660 { 00:08:42.660 "code": -17, 00:08:42.660 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:42.660 } 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.660 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.920 [2024-11-18 03:08:46.255850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:42.920 [2024-11-18 03:08:46.255976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.920 [2024-11-18 03:08:46.256024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:42.920 [2024-11-18 03:08:46.256064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.920 [2024-11-18 03:08:46.258393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.920 [2024-11-18 03:08:46.258472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:42.920 [2024-11-18 03:08:46.258594] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:42.920 [2024-11-18 03:08:46.258663] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:42.920 pt1 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.920 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.920 "name": "raid_bdev1", 00:08:42.920 "uuid": "69fa8bfb-3148-4c89-a049-028110952fd7", 00:08:42.920 
"strip_size_kb": 64, 00:08:42.920 "state": "configuring", 00:08:42.920 "raid_level": "raid0", 00:08:42.920 "superblock": true, 00:08:42.920 "num_base_bdevs": 3, 00:08:42.920 "num_base_bdevs_discovered": 1, 00:08:42.920 "num_base_bdevs_operational": 3, 00:08:42.920 "base_bdevs_list": [ 00:08:42.920 { 00:08:42.920 "name": "pt1", 00:08:42.920 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.920 "is_configured": true, 00:08:42.920 "data_offset": 2048, 00:08:42.920 "data_size": 63488 00:08:42.920 }, 00:08:42.920 { 00:08:42.920 "name": null, 00:08:42.920 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.920 "is_configured": false, 00:08:42.920 "data_offset": 2048, 00:08:42.920 "data_size": 63488 00:08:42.920 }, 00:08:42.920 { 00:08:42.920 "name": null, 00:08:42.920 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.920 "is_configured": false, 00:08:42.920 "data_offset": 2048, 00:08:42.921 "data_size": 63488 00:08:42.921 } 00:08:42.921 ] 00:08:42.921 }' 00:08:42.921 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.921 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.180 [2024-11-18 03:08:46.711110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:43.180 [2024-11-18 03:08:46.711191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.180 [2024-11-18 03:08:46.711212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:43.180 [2024-11-18 03:08:46.711227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.180 [2024-11-18 03:08:46.711651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.180 [2024-11-18 03:08:46.711688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:43.180 [2024-11-18 03:08:46.711764] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:43.180 [2024-11-18 03:08:46.711788] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:43.180 pt2 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.180 [2024-11-18 03:08:46.723117] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.180 03:08:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.180 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.439 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.439 "name": "raid_bdev1", 00:08:43.439 "uuid": "69fa8bfb-3148-4c89-a049-028110952fd7", 00:08:43.439 "strip_size_kb": 64, 00:08:43.439 "state": "configuring", 00:08:43.439 "raid_level": "raid0", 00:08:43.439 "superblock": true, 00:08:43.439 "num_base_bdevs": 3, 00:08:43.439 "num_base_bdevs_discovered": 1, 00:08:43.439 "num_base_bdevs_operational": 3, 00:08:43.439 "base_bdevs_list": [ 00:08:43.439 { 00:08:43.439 "name": "pt1", 00:08:43.439 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.439 "is_configured": true, 00:08:43.439 "data_offset": 2048, 00:08:43.439 "data_size": 63488 00:08:43.439 }, 00:08:43.439 { 00:08:43.439 "name": null, 00:08:43.439 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.439 "is_configured": false, 00:08:43.439 "data_offset": 0, 00:08:43.439 "data_size": 63488 00:08:43.439 }, 00:08:43.439 { 00:08:43.439 "name": null, 00:08:43.439 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.439 
"is_configured": false, 00:08:43.439 "data_offset": 2048, 00:08:43.439 "data_size": 63488 00:08:43.439 } 00:08:43.439 ] 00:08:43.439 }' 00:08:43.439 03:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.440 03:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.699 [2024-11-18 03:08:47.190264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:43.699 [2024-11-18 03:08:47.190397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.699 [2024-11-18 03:08:47.190457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:43.699 [2024-11-18 03:08:47.190489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.699 [2024-11-18 03:08:47.190940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.699 [2024-11-18 03:08:47.191017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:43.699 [2024-11-18 03:08:47.191137] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:43.699 [2024-11-18 03:08:47.191200] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:43.699 pt2 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.699 [2024-11-18 03:08:47.202212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:43.699 [2024-11-18 03:08:47.202306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.699 [2024-11-18 03:08:47.202342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:43.699 [2024-11-18 03:08:47.202369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.699 [2024-11-18 03:08:47.202795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.699 [2024-11-18 03:08:47.202854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:43.699 [2024-11-18 03:08:47.202956] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:43.699 [2024-11-18 03:08:47.203019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:43.699 [2024-11-18 03:08:47.203142] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:43.699 [2024-11-18 03:08:47.203192] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:43.699 [2024-11-18 03:08:47.203464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:43.699 [2024-11-18 03:08:47.203615] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:43.699 [2024-11-18 03:08:47.203660] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:43.699 [2024-11-18 03:08:47.203808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.699 pt3 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.699 "name": "raid_bdev1", 00:08:43.699 "uuid": "69fa8bfb-3148-4c89-a049-028110952fd7", 00:08:43.699 "strip_size_kb": 64, 00:08:43.699 "state": "online", 00:08:43.699 "raid_level": "raid0", 00:08:43.699 "superblock": true, 00:08:43.699 "num_base_bdevs": 3, 00:08:43.699 "num_base_bdevs_discovered": 3, 00:08:43.699 "num_base_bdevs_operational": 3, 00:08:43.699 "base_bdevs_list": [ 00:08:43.699 { 00:08:43.699 "name": "pt1", 00:08:43.699 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.699 "is_configured": true, 00:08:43.699 "data_offset": 2048, 00:08:43.699 "data_size": 63488 00:08:43.699 }, 00:08:43.699 { 00:08:43.699 "name": "pt2", 00:08:43.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.699 "is_configured": true, 00:08:43.699 "data_offset": 2048, 00:08:43.699 "data_size": 63488 00:08:43.699 }, 00:08:43.699 { 00:08:43.699 "name": "pt3", 00:08:43.699 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.699 "is_configured": true, 00:08:43.699 "data_offset": 2048, 00:08:43.699 "data_size": 63488 00:08:43.699 } 00:08:43.699 ] 00:08:43.699 }' 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.699 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.267 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:44.267 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:44.267 03:08:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.267 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:44.267 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.267 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.267 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.267 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.267 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.267 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.267 [2024-11-18 03:08:47.673764] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.267 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.267 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.267 "name": "raid_bdev1", 00:08:44.267 "aliases": [ 00:08:44.267 "69fa8bfb-3148-4c89-a049-028110952fd7" 00:08:44.267 ], 00:08:44.267 "product_name": "Raid Volume", 00:08:44.267 "block_size": 512, 00:08:44.267 "num_blocks": 190464, 00:08:44.267 "uuid": "69fa8bfb-3148-4c89-a049-028110952fd7", 00:08:44.267 "assigned_rate_limits": { 00:08:44.267 "rw_ios_per_sec": 0, 00:08:44.267 "rw_mbytes_per_sec": 0, 00:08:44.268 "r_mbytes_per_sec": 0, 00:08:44.268 "w_mbytes_per_sec": 0 00:08:44.268 }, 00:08:44.268 "claimed": false, 00:08:44.268 "zoned": false, 00:08:44.268 "supported_io_types": { 00:08:44.268 "read": true, 00:08:44.268 "write": true, 00:08:44.268 "unmap": true, 00:08:44.268 "flush": true, 00:08:44.268 "reset": true, 00:08:44.268 "nvme_admin": false, 00:08:44.268 "nvme_io": false, 00:08:44.268 "nvme_io_md": false, 00:08:44.268 
"write_zeroes": true, 00:08:44.268 "zcopy": false, 00:08:44.268 "get_zone_info": false, 00:08:44.268 "zone_management": false, 00:08:44.268 "zone_append": false, 00:08:44.268 "compare": false, 00:08:44.268 "compare_and_write": false, 00:08:44.268 "abort": false, 00:08:44.268 "seek_hole": false, 00:08:44.268 "seek_data": false, 00:08:44.268 "copy": false, 00:08:44.268 "nvme_iov_md": false 00:08:44.268 }, 00:08:44.268 "memory_domains": [ 00:08:44.268 { 00:08:44.268 "dma_device_id": "system", 00:08:44.268 "dma_device_type": 1 00:08:44.268 }, 00:08:44.268 { 00:08:44.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.268 "dma_device_type": 2 00:08:44.268 }, 00:08:44.268 { 00:08:44.268 "dma_device_id": "system", 00:08:44.268 "dma_device_type": 1 00:08:44.268 }, 00:08:44.268 { 00:08:44.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.268 "dma_device_type": 2 00:08:44.268 }, 00:08:44.268 { 00:08:44.268 "dma_device_id": "system", 00:08:44.268 "dma_device_type": 1 00:08:44.268 }, 00:08:44.268 { 00:08:44.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.268 "dma_device_type": 2 00:08:44.268 } 00:08:44.268 ], 00:08:44.268 "driver_specific": { 00:08:44.268 "raid": { 00:08:44.268 "uuid": "69fa8bfb-3148-4c89-a049-028110952fd7", 00:08:44.268 "strip_size_kb": 64, 00:08:44.268 "state": "online", 00:08:44.268 "raid_level": "raid0", 00:08:44.268 "superblock": true, 00:08:44.268 "num_base_bdevs": 3, 00:08:44.268 "num_base_bdevs_discovered": 3, 00:08:44.268 "num_base_bdevs_operational": 3, 00:08:44.268 "base_bdevs_list": [ 00:08:44.268 { 00:08:44.268 "name": "pt1", 00:08:44.268 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.268 "is_configured": true, 00:08:44.268 "data_offset": 2048, 00:08:44.268 "data_size": 63488 00:08:44.268 }, 00:08:44.268 { 00:08:44.268 "name": "pt2", 00:08:44.268 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.268 "is_configured": true, 00:08:44.268 "data_offset": 2048, 00:08:44.268 "data_size": 63488 00:08:44.268 }, 00:08:44.268 
{ 00:08:44.268 "name": "pt3", 00:08:44.268 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:44.268 "is_configured": true, 00:08:44.268 "data_offset": 2048, 00:08:44.268 "data_size": 63488 00:08:44.268 } 00:08:44.268 ] 00:08:44.268 } 00:08:44.268 } 00:08:44.268 }' 00:08:44.268 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.268 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:44.268 pt2 00:08:44.268 pt3' 00:08:44.268 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.268 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.268 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.268 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:44.268 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.268 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.268 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.268 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.527 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.528 03:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:44.528 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.528 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.528 [2024-11-18 
03:08:47.977268] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.528 03:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.528 03:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 69fa8bfb-3148-4c89-a049-028110952fd7 '!=' 69fa8bfb-3148-4c89-a049-028110952fd7 ']' 00:08:44.528 03:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:44.528 03:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:44.528 03:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:44.528 03:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76390 00:08:44.528 03:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 76390 ']' 00:08:44.528 03:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 76390 00:08:44.528 03:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:44.528 03:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.528 03:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76390 00:08:44.528 03:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:44.528 03:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:44.528 03:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76390' 00:08:44.528 killing process with pid 76390 00:08:44.528 03:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 76390 00:08:44.528 [2024-11-18 03:08:48.055018] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.528 [2024-11-18 03:08:48.055194] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.528 [2024-11-18 03:08:48.055297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.528 [2024-11-18 03:08:48.055351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:44.528 03:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 76390 00:08:44.528 [2024-11-18 03:08:48.089482] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:44.786 03:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:44.786 00:08:44.786 real 0m4.158s 00:08:44.786 user 0m6.624s 00:08:44.786 sys 0m0.857s 00:08:44.786 03:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.786 03:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.786 ************************************ 00:08:44.786 END TEST raid_superblock_test 00:08:44.786 ************************************ 00:08:45.045 03:08:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:45.045 03:08:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:45.045 03:08:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.045 03:08:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:45.045 ************************************ 00:08:45.045 START TEST raid_read_error_test 00:08:45.045 ************************************ 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:45.045 03:08:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oFuhfisHta 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76632 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76632 00:08:45.045 03:08:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:45.046 03:08:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76632 ']' 00:08:45.046 03:08:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.046 03:08:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.046 03:08:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.046 03:08:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.046 03:08:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.046 [2024-11-18 03:08:48.498120] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:45.046 [2024-11-18 03:08:48.498794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76632 ] 00:08:45.304 [2024-11-18 03:08:48.658753] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.304 [2024-11-18 03:08:48.708937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.304 [2024-11-18 03:08:48.751610] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.304 [2024-11-18 03:08:48.751651] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.872 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.872 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:45.872 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:45.872 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:45.872 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.872 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.872 BaseBdev1_malloc 00:08:45.872 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.872 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:45.872 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.872 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.872 true 00:08:45.872 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:45.872 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:45.872 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.872 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.872 [2024-11-18 03:08:49.370168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:45.872 [2024-11-18 03:08:49.370223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.872 [2024-11-18 03:08:49.370254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:45.872 [2024-11-18 03:08:49.370270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.872 [2024-11-18 03:08:49.372570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.873 [2024-11-18 03:08:49.372613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:45.873 BaseBdev1 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.873 BaseBdev2_malloc 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.873 true 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.873 [2024-11-18 03:08:49.415190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:45.873 [2024-11-18 03:08:49.415250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.873 [2024-11-18 03:08:49.415272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:45.873 [2024-11-18 03:08:49.415281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.873 [2024-11-18 03:08:49.417490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.873 [2024-11-18 03:08:49.417532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:45.873 BaseBdev2 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.873 BaseBdev3_malloc 00:08:45.873 03:08:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.873 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.132 true 00:08:46.132 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.132 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:46.132 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.132 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.132 [2024-11-18 03:08:49.455996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:46.132 [2024-11-18 03:08:49.456054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.132 [2024-11-18 03:08:49.456076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:46.132 [2024-11-18 03:08:49.456085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.132 [2024-11-18 03:08:49.458373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.132 [2024-11-18 03:08:49.458474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:46.132 BaseBdev3 00:08:46.132 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.132 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.133 [2024-11-18 03:08:49.468038] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.133 [2024-11-18 03:08:49.470025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:46.133 [2024-11-18 03:08:49.470117] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:46.133 [2024-11-18 03:08:49.470313] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:46.133 [2024-11-18 03:08:49.470330] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:46.133 [2024-11-18 03:08:49.470622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:46.133 [2024-11-18 03:08:49.470781] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:46.133 [2024-11-18 03:08:49.470793] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:46.133 [2024-11-18 03:08:49.470982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.133 03:08:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.133 "name": "raid_bdev1", 00:08:46.133 "uuid": "c00af485-0087-4b32-8a0d-9ce002cf988d", 00:08:46.133 "strip_size_kb": 64, 00:08:46.133 "state": "online", 00:08:46.133 "raid_level": "raid0", 00:08:46.133 "superblock": true, 00:08:46.133 "num_base_bdevs": 3, 00:08:46.133 "num_base_bdevs_discovered": 3, 00:08:46.133 "num_base_bdevs_operational": 3, 00:08:46.133 "base_bdevs_list": [ 00:08:46.133 { 00:08:46.133 "name": "BaseBdev1", 00:08:46.133 "uuid": "da5b21fb-09ed-5e98-a92e-95a8ea6f7579", 00:08:46.133 "is_configured": true, 00:08:46.133 "data_offset": 2048, 00:08:46.133 "data_size": 63488 00:08:46.133 }, 00:08:46.133 { 00:08:46.133 "name": "BaseBdev2", 00:08:46.133 "uuid": "5ff06537-1334-5077-949f-ce6333cad7ae", 00:08:46.133 "is_configured": true, 00:08:46.133 "data_offset": 2048, 00:08:46.133 "data_size": 63488 
00:08:46.133 }, 00:08:46.133 { 00:08:46.133 "name": "BaseBdev3", 00:08:46.133 "uuid": "3e538bfe-6ac1-5e10-a062-f87960d649ea", 00:08:46.133 "is_configured": true, 00:08:46.133 "data_offset": 2048, 00:08:46.133 "data_size": 63488 00:08:46.133 } 00:08:46.133 ] 00:08:46.133 }' 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.133 03:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.399 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:46.399 03:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:46.670 [2024-11-18 03:08:50.031447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:47.613 03:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:47.613 03:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.613 03:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.614 "name": "raid_bdev1", 00:08:47.614 "uuid": "c00af485-0087-4b32-8a0d-9ce002cf988d", 00:08:47.614 "strip_size_kb": 64, 00:08:47.614 "state": "online", 00:08:47.614 "raid_level": "raid0", 00:08:47.614 "superblock": true, 00:08:47.614 "num_base_bdevs": 3, 00:08:47.614 "num_base_bdevs_discovered": 3, 00:08:47.614 "num_base_bdevs_operational": 3, 00:08:47.614 "base_bdevs_list": [ 00:08:47.614 { 00:08:47.614 "name": "BaseBdev1", 00:08:47.614 "uuid": "da5b21fb-09ed-5e98-a92e-95a8ea6f7579", 00:08:47.614 "is_configured": true, 00:08:47.614 "data_offset": 2048, 00:08:47.614 "data_size": 63488 
00:08:47.614 }, 00:08:47.614 { 00:08:47.614 "name": "BaseBdev2", 00:08:47.614 "uuid": "5ff06537-1334-5077-949f-ce6333cad7ae", 00:08:47.614 "is_configured": true, 00:08:47.614 "data_offset": 2048, 00:08:47.614 "data_size": 63488 00:08:47.614 }, 00:08:47.614 { 00:08:47.614 "name": "BaseBdev3", 00:08:47.614 "uuid": "3e538bfe-6ac1-5e10-a062-f87960d649ea", 00:08:47.614 "is_configured": true, 00:08:47.614 "data_offset": 2048, 00:08:47.614 "data_size": 63488 00:08:47.614 } 00:08:47.614 ] 00:08:47.614 }' 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.614 03:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.872 03:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:47.872 03:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.872 03:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.872 [2024-11-18 03:08:51.388116] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:47.872 [2024-11-18 03:08:51.388241] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.872 [2024-11-18 03:08:51.391173] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.872 [2024-11-18 03:08:51.391222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.872 [2024-11-18 03:08:51.391277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.872 [2024-11-18 03:08:51.391290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:47.872 { 00:08:47.872 "results": [ 00:08:47.872 { 00:08:47.872 "job": "raid_bdev1", 00:08:47.872 "core_mask": "0x1", 00:08:47.872 "workload": "randrw", 00:08:47.872 "percentage": 50, 
00:08:47.872 "status": "finished", 00:08:47.872 "queue_depth": 1, 00:08:47.872 "io_size": 131072, 00:08:47.872 "runtime": 1.357375, 00:08:47.872 "iops": 15614.697485956349, 00:08:47.872 "mibps": 1951.8371857445436, 00:08:47.872 "io_failed": 1, 00:08:47.872 "io_timeout": 0, 00:08:47.872 "avg_latency_us": 88.7337325737492, 00:08:47.872 "min_latency_us": 27.72401746724891, 00:08:47.872 "max_latency_us": 1638.4 00:08:47.872 } 00:08:47.872 ], 00:08:47.872 "core_count": 1 00:08:47.872 } 00:08:47.872 03:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.872 03:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76632 00:08:47.872 03:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76632 ']' 00:08:47.872 03:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76632 00:08:47.872 03:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:47.872 03:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.872 03:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76632 00:08:47.872 03:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:47.872 03:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:47.872 03:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76632' 00:08:47.872 killing process with pid 76632 00:08:47.872 03:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76632 00:08:47.873 [2024-11-18 03:08:51.438466] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.873 03:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76632 00:08:48.131 [2024-11-18 03:08:51.465255] 
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:48.131 03:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oFuhfisHta 00:08:48.131 03:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:48.131 03:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:48.390 03:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:48.390 03:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:48.390 03:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:48.390 03:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:48.390 ************************************ 00:08:48.390 END TEST raid_read_error_test 00:08:48.390 ************************************ 00:08:48.390 03:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:48.390 00:08:48.390 real 0m3.319s 00:08:48.390 user 0m4.238s 00:08:48.390 sys 0m0.521s 00:08:48.390 03:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.390 03:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.390 03:08:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:48.390 03:08:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:48.390 03:08:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.390 03:08:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:48.390 ************************************ 00:08:48.390 START TEST raid_write_error_test 00:08:48.390 ************************************ 00:08:48.390 03:08:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:08:48.390 03:08:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:48.390 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:48.390 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:48.390 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:48.390 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:48.391 03:08:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Nqv09Kf6t6 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76761 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76761 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76761 ']' 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.391 03:08:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.391 [2024-11-18 03:08:51.893108] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:48.391 [2024-11-18 03:08:51.893357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76761 ] 00:08:48.650 [2024-11-18 03:08:52.057228] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.650 [2024-11-18 03:08:52.109122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.650 [2024-11-18 03:08:52.153914] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.650 [2024-11-18 03:08:52.153949] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.217 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.217 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:49.217 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.217 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:49.217 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.217 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.217 BaseBdev1_malloc 00:08:49.217 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.217 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:49.217 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.217 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.217 true 00:08:49.217 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.217 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:49.217 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.217 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.477 [2024-11-18 03:08:52.793205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:49.477 [2024-11-18 03:08:52.793333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.477 [2024-11-18 03:08:52.793382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:49.477 [2024-11-18 03:08:52.793418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.477 [2024-11-18 03:08:52.795699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.477 [2024-11-18 03:08:52.795779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:49.477 BaseBdev1 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.477 BaseBdev2_malloc 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.477 true 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.477 [2024-11-18 03:08:52.841599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:49.477 [2024-11-18 03:08:52.841723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.477 [2024-11-18 03:08:52.841764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:49.477 [2024-11-18 03:08:52.841798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.477 [2024-11-18 03:08:52.844095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.477 [2024-11-18 03:08:52.844178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:49.477 BaseBdev2 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.477 03:08:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.477 BaseBdev3_malloc 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.477 true 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.477 [2024-11-18 03:08:52.882707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:49.477 [2024-11-18 03:08:52.882818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.477 [2024-11-18 03:08:52.882861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:49.477 [2024-11-18 03:08:52.882893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.477 [2024-11-18 03:08:52.885239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.477 [2024-11-18 03:08:52.885320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:49.477 BaseBdev3 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.477 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.477 [2024-11-18 03:08:52.894734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.478 [2024-11-18 03:08:52.896798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.478 [2024-11-18 03:08:52.896935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:49.478 [2024-11-18 03:08:52.897144] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:49.478 [2024-11-18 03:08:52.897162] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:49.478 [2024-11-18 03:08:52.897446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:49.478 [2024-11-18 03:08:52.897580] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:49.478 [2024-11-18 03:08:52.897589] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:49.478 [2024-11-18 03:08:52.897736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.478 "name": "raid_bdev1", 00:08:49.478 "uuid": "58cdfb47-0786-471a-8f2a-32514aec35cf", 00:08:49.478 "strip_size_kb": 64, 00:08:49.478 "state": "online", 00:08:49.478 "raid_level": "raid0", 00:08:49.478 "superblock": true, 00:08:49.478 "num_base_bdevs": 3, 00:08:49.478 "num_base_bdevs_discovered": 3, 00:08:49.478 "num_base_bdevs_operational": 3, 00:08:49.478 "base_bdevs_list": [ 00:08:49.478 { 00:08:49.478 "name": "BaseBdev1", 
00:08:49.478 "uuid": "f758c3ea-9265-5f63-8bde-f83e0318238a", 00:08:49.478 "is_configured": true, 00:08:49.478 "data_offset": 2048, 00:08:49.478 "data_size": 63488 00:08:49.478 }, 00:08:49.478 { 00:08:49.478 "name": "BaseBdev2", 00:08:49.478 "uuid": "5021ea95-3c2d-56d8-9b3d-3cf08d99c635", 00:08:49.478 "is_configured": true, 00:08:49.478 "data_offset": 2048, 00:08:49.478 "data_size": 63488 00:08:49.478 }, 00:08:49.478 { 00:08:49.478 "name": "BaseBdev3", 00:08:49.478 "uuid": "6bbe5f4f-5d24-5b2c-8d11-3276183032c3", 00:08:49.478 "is_configured": true, 00:08:49.478 "data_offset": 2048, 00:08:49.478 "data_size": 63488 00:08:49.478 } 00:08:49.478 ] 00:08:49.478 }' 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.478 03:08:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.047 03:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:50.047 03:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:50.047 [2024-11-18 03:08:53.454168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:50.985 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:50.985 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.985 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.985 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.985 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:50.985 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:50.985 03:08:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:50.985 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:50.985 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.985 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.985 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.985 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.985 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.985 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.985 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.985 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.985 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.985 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.986 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.986 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.986 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.986 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.986 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.986 "name": "raid_bdev1", 00:08:50.986 "uuid": "58cdfb47-0786-471a-8f2a-32514aec35cf", 00:08:50.986 "strip_size_kb": 64, 00:08:50.986 "state": "online", 00:08:50.986 
"raid_level": "raid0", 00:08:50.986 "superblock": true, 00:08:50.986 "num_base_bdevs": 3, 00:08:50.986 "num_base_bdevs_discovered": 3, 00:08:50.986 "num_base_bdevs_operational": 3, 00:08:50.986 "base_bdevs_list": [ 00:08:50.986 { 00:08:50.986 "name": "BaseBdev1", 00:08:50.986 "uuid": "f758c3ea-9265-5f63-8bde-f83e0318238a", 00:08:50.986 "is_configured": true, 00:08:50.986 "data_offset": 2048, 00:08:50.986 "data_size": 63488 00:08:50.986 }, 00:08:50.986 { 00:08:50.986 "name": "BaseBdev2", 00:08:50.986 "uuid": "5021ea95-3c2d-56d8-9b3d-3cf08d99c635", 00:08:50.986 "is_configured": true, 00:08:50.986 "data_offset": 2048, 00:08:50.986 "data_size": 63488 00:08:50.986 }, 00:08:50.986 { 00:08:50.986 "name": "BaseBdev3", 00:08:50.986 "uuid": "6bbe5f4f-5d24-5b2c-8d11-3276183032c3", 00:08:50.986 "is_configured": true, 00:08:50.986 "data_offset": 2048, 00:08:50.986 "data_size": 63488 00:08:50.986 } 00:08:50.986 ] 00:08:50.986 }' 00:08:50.986 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.986 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.244 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:51.244 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.503 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.503 [2024-11-18 03:08:54.826452] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.503 [2024-11-18 03:08:54.826556] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.503 [2024-11-18 03:08:54.829390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.503 [2024-11-18 03:08:54.829443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.503 [2024-11-18 03:08:54.829480] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.503 [2024-11-18 03:08:54.829493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:51.503 { 00:08:51.503 "results": [ 00:08:51.503 { 00:08:51.503 "job": "raid_bdev1", 00:08:51.503 "core_mask": "0x1", 00:08:51.503 "workload": "randrw", 00:08:51.503 "percentage": 50, 00:08:51.503 "status": "finished", 00:08:51.503 "queue_depth": 1, 00:08:51.503 "io_size": 131072, 00:08:51.503 "runtime": 1.373052, 00:08:51.503 "iops": 15762.695076370013, 00:08:51.503 "mibps": 1970.3368845462517, 00:08:51.503 "io_failed": 1, 00:08:51.503 "io_timeout": 0, 00:08:51.503 "avg_latency_us": 87.92303047568474, 00:08:51.503 "min_latency_us": 20.234061135371178, 00:08:51.503 "max_latency_us": 1638.4 00:08:51.503 } 00:08:51.503 ], 00:08:51.503 "core_count": 1 00:08:51.503 } 00:08:51.503 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.503 03:08:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76761 00:08:51.503 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76761 ']' 00:08:51.503 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76761 00:08:51.503 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:51.503 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:51.503 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76761 00:08:51.503 killing process with pid 76761 00:08:51.503 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:51.503 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:51.503 03:08:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76761' 00:08:51.503 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76761 00:08:51.503 [2024-11-18 03:08:54.866045] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.503 03:08:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76761 00:08:51.503 [2024-11-18 03:08:54.892510] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.763 03:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Nqv09Kf6t6 00:08:51.763 03:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:51.763 03:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:51.763 ************************************ 00:08:51.763 END TEST raid_write_error_test 00:08:51.763 ************************************ 00:08:51.763 03:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:51.763 03:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:51.763 03:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:51.763 03:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:51.763 03:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:51.763 00:08:51.763 real 0m3.348s 00:08:51.763 user 0m4.251s 00:08:51.763 sys 0m0.529s 00:08:51.763 03:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.763 03:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.763 03:08:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:51.763 03:08:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:51.763 03:08:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:51.763 03:08:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.763 03:08:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.763 ************************************ 00:08:51.763 START TEST raid_state_function_test 00:08:51.763 ************************************ 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:51.763 03:08:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76894 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76894' 00:08:51.763 Process raid pid: 76894 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76894 00:08:51.763 03:08:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 76894 ']' 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.763 03:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.763 [2024-11-18 03:08:55.290966] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:51.763 [2024-11-18 03:08:55.291130] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.023 [2024-11-18 03:08:55.453048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.023 [2024-11-18 03:08:55.504521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.023 [2024-11-18 03:08:55.548388] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.023 [2024-11-18 03:08:55.548424] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.593 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.593 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:52.593 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:52.593 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.593 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.593 [2024-11-18 03:08:56.162841] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:52.593 [2024-11-18 03:08:56.162901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:52.593 [2024-11-18 03:08:56.162918] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:52.593 [2024-11-18 03:08:56.162931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:52.593 [2024-11-18 03:08:56.162938] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:52.593 [2024-11-18 03:08:56.162951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:52.851 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.851 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:52.851 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.851 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.851 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.851 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.851 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.851 03:08:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.851 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.851 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.851 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.851 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.851 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.851 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.851 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.851 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.851 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.852 "name": "Existed_Raid", 00:08:52.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.852 "strip_size_kb": 64, 00:08:52.852 "state": "configuring", 00:08:52.852 "raid_level": "concat", 00:08:52.852 "superblock": false, 00:08:52.852 "num_base_bdevs": 3, 00:08:52.852 "num_base_bdevs_discovered": 0, 00:08:52.852 "num_base_bdevs_operational": 3, 00:08:52.852 "base_bdevs_list": [ 00:08:52.852 { 00:08:52.852 "name": "BaseBdev1", 00:08:52.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.852 "is_configured": false, 00:08:52.852 "data_offset": 0, 00:08:52.852 "data_size": 0 00:08:52.852 }, 00:08:52.852 { 00:08:52.852 "name": "BaseBdev2", 00:08:52.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.852 "is_configured": false, 00:08:52.852 "data_offset": 0, 00:08:52.852 "data_size": 0 00:08:52.852 }, 00:08:52.852 { 00:08:52.852 "name": "BaseBdev3", 00:08:52.852 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:52.852 "is_configured": false, 00:08:52.852 "data_offset": 0, 00:08:52.852 "data_size": 0 00:08:52.852 } 00:08:52.852 ] 00:08:52.852 }' 00:08:52.852 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.852 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.110 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.110 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.110 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.110 [2024-11-18 03:08:56.617982] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.111 [2024-11-18 03:08:56.618113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.111 [2024-11-18 03:08:56.630000] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.111 [2024-11-18 03:08:56.630055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.111 [2024-11-18 03:08:56.630065] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.111 [2024-11-18 03:08:56.630076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:53.111 [2024-11-18 03:08:56.630083] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.111 [2024-11-18 03:08:56.630093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.111 [2024-11-18 03:08:56.651425] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.111 BaseBdev1 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.111 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.111 [ 00:08:53.111 { 00:08:53.111 "name": "BaseBdev1", 00:08:53.111 "aliases": [ 00:08:53.111 "de4913de-85a1-47c0-be19-483e04091524" 00:08:53.111 ], 00:08:53.111 "product_name": "Malloc disk", 00:08:53.111 "block_size": 512, 00:08:53.111 "num_blocks": 65536, 00:08:53.111 "uuid": "de4913de-85a1-47c0-be19-483e04091524", 00:08:53.111 "assigned_rate_limits": { 00:08:53.111 "rw_ios_per_sec": 0, 00:08:53.111 "rw_mbytes_per_sec": 0, 00:08:53.111 "r_mbytes_per_sec": 0, 00:08:53.111 "w_mbytes_per_sec": 0 00:08:53.111 }, 00:08:53.111 "claimed": true, 00:08:53.111 "claim_type": "exclusive_write", 00:08:53.111 "zoned": false, 00:08:53.111 "supported_io_types": { 00:08:53.111 "read": true, 00:08:53.111 "write": true, 00:08:53.111 "unmap": true, 00:08:53.111 "flush": true, 00:08:53.111 "reset": true, 00:08:53.111 "nvme_admin": false, 00:08:53.111 "nvme_io": false, 00:08:53.111 "nvme_io_md": false, 00:08:53.111 "write_zeroes": true, 00:08:53.111 "zcopy": true, 00:08:53.111 "get_zone_info": false, 00:08:53.111 "zone_management": false, 00:08:53.111 "zone_append": false, 00:08:53.111 "compare": false, 00:08:53.111 "compare_and_write": false, 00:08:53.111 "abort": true, 00:08:53.111 "seek_hole": false, 00:08:53.370 "seek_data": false, 00:08:53.370 "copy": true, 00:08:53.370 "nvme_iov_md": false 00:08:53.370 }, 00:08:53.370 "memory_domains": [ 00:08:53.370 { 00:08:53.370 "dma_device_id": "system", 00:08:53.370 "dma_device_type": 1 00:08:53.370 }, 00:08:53.370 { 00:08:53.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:53.370 "dma_device_type": 2 00:08:53.370 } 00:08:53.370 ], 00:08:53.370 "driver_specific": {} 00:08:53.370 } 00:08:53.370 ] 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.370 03:08:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.370 "name": "Existed_Raid", 00:08:53.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.370 "strip_size_kb": 64, 00:08:53.370 "state": "configuring", 00:08:53.370 "raid_level": "concat", 00:08:53.370 "superblock": false, 00:08:53.370 "num_base_bdevs": 3, 00:08:53.370 "num_base_bdevs_discovered": 1, 00:08:53.370 "num_base_bdevs_operational": 3, 00:08:53.370 "base_bdevs_list": [ 00:08:53.370 { 00:08:53.370 "name": "BaseBdev1", 00:08:53.370 "uuid": "de4913de-85a1-47c0-be19-483e04091524", 00:08:53.370 "is_configured": true, 00:08:53.370 "data_offset": 0, 00:08:53.370 "data_size": 65536 00:08:53.370 }, 00:08:53.370 { 00:08:53.370 "name": "BaseBdev2", 00:08:53.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.370 "is_configured": false, 00:08:53.370 "data_offset": 0, 00:08:53.370 "data_size": 0 00:08:53.370 }, 00:08:53.370 { 00:08:53.370 "name": "BaseBdev3", 00:08:53.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.370 "is_configured": false, 00:08:53.370 "data_offset": 0, 00:08:53.370 "data_size": 0 00:08:53.370 } 00:08:53.370 ] 00:08:53.370 }' 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.370 03:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.629 [2024-11-18 03:08:57.142725] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.629 [2024-11-18 03:08:57.142868] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.629 [2024-11-18 03:08:57.154737] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.629 [2024-11-18 03:08:57.156821] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.629 [2024-11-18 03:08:57.156870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.629 [2024-11-18 03:08:57.156880] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.629 [2024-11-18 03:08:57.156891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.629 03:08:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.629 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.889 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.889 "name": "Existed_Raid", 00:08:53.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.889 "strip_size_kb": 64, 00:08:53.889 "state": "configuring", 00:08:53.889 "raid_level": "concat", 00:08:53.889 "superblock": false, 00:08:53.889 "num_base_bdevs": 3, 00:08:53.889 "num_base_bdevs_discovered": 1, 00:08:53.889 "num_base_bdevs_operational": 3, 00:08:53.889 "base_bdevs_list": [ 00:08:53.889 { 00:08:53.889 "name": "BaseBdev1", 00:08:53.889 "uuid": "de4913de-85a1-47c0-be19-483e04091524", 00:08:53.889 "is_configured": true, 00:08:53.889 "data_offset": 
0, 00:08:53.889 "data_size": 65536 00:08:53.889 }, 00:08:53.889 { 00:08:53.889 "name": "BaseBdev2", 00:08:53.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.889 "is_configured": false, 00:08:53.889 "data_offset": 0, 00:08:53.889 "data_size": 0 00:08:53.889 }, 00:08:53.889 { 00:08:53.889 "name": "BaseBdev3", 00:08:53.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.889 "is_configured": false, 00:08:53.889 "data_offset": 0, 00:08:53.889 "data_size": 0 00:08:53.889 } 00:08:53.889 ] 00:08:53.889 }' 00:08:53.889 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.889 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.149 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:54.149 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.149 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.149 [2024-11-18 03:08:57.614271] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.149 BaseBdev2 00:08:54.149 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.149 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:54.149 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:54.149 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:54.149 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:54.149 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:54.149 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:08:54.149 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:54.149 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.149 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.149 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.149 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:54.149 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.149 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.149 [ 00:08:54.149 { 00:08:54.149 "name": "BaseBdev2", 00:08:54.149 "aliases": [ 00:08:54.149 "a464a856-d48e-4f37-b4ba-0e7022c91a6c" 00:08:54.149 ], 00:08:54.149 "product_name": "Malloc disk", 00:08:54.149 "block_size": 512, 00:08:54.149 "num_blocks": 65536, 00:08:54.149 "uuid": "a464a856-d48e-4f37-b4ba-0e7022c91a6c", 00:08:54.149 "assigned_rate_limits": { 00:08:54.149 "rw_ios_per_sec": 0, 00:08:54.149 "rw_mbytes_per_sec": 0, 00:08:54.149 "r_mbytes_per_sec": 0, 00:08:54.149 "w_mbytes_per_sec": 0 00:08:54.150 }, 00:08:54.150 "claimed": true, 00:08:54.150 "claim_type": "exclusive_write", 00:08:54.150 "zoned": false, 00:08:54.150 "supported_io_types": { 00:08:54.150 "read": true, 00:08:54.150 "write": true, 00:08:54.150 "unmap": true, 00:08:54.150 "flush": true, 00:08:54.150 "reset": true, 00:08:54.150 "nvme_admin": false, 00:08:54.150 "nvme_io": false, 00:08:54.150 "nvme_io_md": false, 00:08:54.150 "write_zeroes": true, 00:08:54.150 "zcopy": true, 00:08:54.150 "get_zone_info": false, 00:08:54.150 "zone_management": false, 00:08:54.150 "zone_append": false, 00:08:54.150 "compare": false, 00:08:54.150 "compare_and_write": false, 00:08:54.150 "abort": true, 00:08:54.150 "seek_hole": 
false, 00:08:54.150 "seek_data": false, 00:08:54.150 "copy": true, 00:08:54.150 "nvme_iov_md": false 00:08:54.150 }, 00:08:54.150 "memory_domains": [ 00:08:54.150 { 00:08:54.150 "dma_device_id": "system", 00:08:54.150 "dma_device_type": 1 00:08:54.150 }, 00:08:54.150 { 00:08:54.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.150 "dma_device_type": 2 00:08:54.150 } 00:08:54.150 ], 00:08:54.150 "driver_specific": {} 00:08:54.150 } 00:08:54.150 ] 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.150 "name": "Existed_Raid", 00:08:54.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.150 "strip_size_kb": 64, 00:08:54.150 "state": "configuring", 00:08:54.150 "raid_level": "concat", 00:08:54.150 "superblock": false, 00:08:54.150 "num_base_bdevs": 3, 00:08:54.150 "num_base_bdevs_discovered": 2, 00:08:54.150 "num_base_bdevs_operational": 3, 00:08:54.150 "base_bdevs_list": [ 00:08:54.150 { 00:08:54.150 "name": "BaseBdev1", 00:08:54.150 "uuid": "de4913de-85a1-47c0-be19-483e04091524", 00:08:54.150 "is_configured": true, 00:08:54.150 "data_offset": 0, 00:08:54.150 "data_size": 65536 00:08:54.150 }, 00:08:54.150 { 00:08:54.150 "name": "BaseBdev2", 00:08:54.150 "uuid": "a464a856-d48e-4f37-b4ba-0e7022c91a6c", 00:08:54.150 "is_configured": true, 00:08:54.150 "data_offset": 0, 00:08:54.150 "data_size": 65536 00:08:54.150 }, 00:08:54.150 { 00:08:54.150 "name": "BaseBdev3", 00:08:54.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.150 "is_configured": false, 00:08:54.150 "data_offset": 0, 00:08:54.150 "data_size": 0 00:08:54.150 } 00:08:54.150 ] 00:08:54.150 }' 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.150 03:08:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.721 [2024-11-18 03:08:58.080808] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:54.721 [2024-11-18 03:08:58.080945] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:54.721 [2024-11-18 03:08:58.080979] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:54.721 [2024-11-18 03:08:58.081335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:54.721 [2024-11-18 03:08:58.081493] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:54.721 [2024-11-18 03:08:58.081505] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:54.721 [2024-11-18 03:08:58.081713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.721 BaseBdev3 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:54.721 03:08:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.721 [ 00:08:54.721 { 00:08:54.721 "name": "BaseBdev3", 00:08:54.721 "aliases": [ 00:08:54.721 "7b6f2a60-e073-4954-85af-669ff16422ab" 00:08:54.721 ], 00:08:54.721 "product_name": "Malloc disk", 00:08:54.721 "block_size": 512, 00:08:54.721 "num_blocks": 65536, 00:08:54.721 "uuid": "7b6f2a60-e073-4954-85af-669ff16422ab", 00:08:54.721 "assigned_rate_limits": { 00:08:54.721 "rw_ios_per_sec": 0, 00:08:54.721 "rw_mbytes_per_sec": 0, 00:08:54.721 "r_mbytes_per_sec": 0, 00:08:54.721 "w_mbytes_per_sec": 0 00:08:54.721 }, 00:08:54.721 "claimed": true, 00:08:54.721 "claim_type": "exclusive_write", 00:08:54.721 "zoned": false, 00:08:54.721 "supported_io_types": { 00:08:54.721 "read": true, 00:08:54.721 "write": true, 00:08:54.721 "unmap": true, 00:08:54.721 "flush": true, 00:08:54.721 "reset": true, 00:08:54.721 "nvme_admin": false, 00:08:54.721 "nvme_io": false, 00:08:54.721 "nvme_io_md": false, 00:08:54.721 "write_zeroes": true, 00:08:54.721 "zcopy": true, 00:08:54.721 "get_zone_info": false, 00:08:54.721 "zone_management": false, 00:08:54.721 "zone_append": false, 00:08:54.721 "compare": false, 
00:08:54.721 "compare_and_write": false, 00:08:54.721 "abort": true, 00:08:54.721 "seek_hole": false, 00:08:54.721 "seek_data": false, 00:08:54.721 "copy": true, 00:08:54.721 "nvme_iov_md": false 00:08:54.721 }, 00:08:54.721 "memory_domains": [ 00:08:54.721 { 00:08:54.721 "dma_device_id": "system", 00:08:54.721 "dma_device_type": 1 00:08:54.721 }, 00:08:54.721 { 00:08:54.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.721 "dma_device_type": 2 00:08:54.721 } 00:08:54.721 ], 00:08:54.721 "driver_specific": {} 00:08:54.721 } 00:08:54.721 ] 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.721 "name": "Existed_Raid", 00:08:54.721 "uuid": "8f67ab31-ec56-4982-bf66-852e4efcd3fb", 00:08:54.721 "strip_size_kb": 64, 00:08:54.721 "state": "online", 00:08:54.721 "raid_level": "concat", 00:08:54.721 "superblock": false, 00:08:54.721 "num_base_bdevs": 3, 00:08:54.721 "num_base_bdevs_discovered": 3, 00:08:54.721 "num_base_bdevs_operational": 3, 00:08:54.721 "base_bdevs_list": [ 00:08:54.721 { 00:08:54.721 "name": "BaseBdev1", 00:08:54.721 "uuid": "de4913de-85a1-47c0-be19-483e04091524", 00:08:54.721 "is_configured": true, 00:08:54.721 "data_offset": 0, 00:08:54.721 "data_size": 65536 00:08:54.721 }, 00:08:54.721 { 00:08:54.721 "name": "BaseBdev2", 00:08:54.721 "uuid": "a464a856-d48e-4f37-b4ba-0e7022c91a6c", 00:08:54.721 "is_configured": true, 00:08:54.721 "data_offset": 0, 00:08:54.721 "data_size": 65536 00:08:54.721 }, 00:08:54.721 { 00:08:54.721 "name": "BaseBdev3", 00:08:54.721 "uuid": "7b6f2a60-e073-4954-85af-669ff16422ab", 00:08:54.721 "is_configured": true, 00:08:54.721 "data_offset": 0, 00:08:54.721 "data_size": 65536 00:08:54.721 } 00:08:54.721 ] 00:08:54.721 }' 00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:54.721 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.291 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:55.291 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:55.291 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:55.291 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:55.291 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:55.291 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:55.291 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:55.291 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:55.291 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.291 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.291 [2024-11-18 03:08:58.572444] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.291 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.291 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:55.291 "name": "Existed_Raid", 00:08:55.291 "aliases": [ 00:08:55.291 "8f67ab31-ec56-4982-bf66-852e4efcd3fb" 00:08:55.291 ], 00:08:55.291 "product_name": "Raid Volume", 00:08:55.291 "block_size": 512, 00:08:55.291 "num_blocks": 196608, 00:08:55.291 "uuid": "8f67ab31-ec56-4982-bf66-852e4efcd3fb", 00:08:55.291 "assigned_rate_limits": { 00:08:55.291 "rw_ios_per_sec": 0, 00:08:55.291 "rw_mbytes_per_sec": 0, 00:08:55.291 "r_mbytes_per_sec": 
0, 00:08:55.291 "w_mbytes_per_sec": 0 00:08:55.291 }, 00:08:55.291 "claimed": false, 00:08:55.291 "zoned": false, 00:08:55.291 "supported_io_types": { 00:08:55.291 "read": true, 00:08:55.291 "write": true, 00:08:55.291 "unmap": true, 00:08:55.291 "flush": true, 00:08:55.291 "reset": true, 00:08:55.291 "nvme_admin": false, 00:08:55.291 "nvme_io": false, 00:08:55.291 "nvme_io_md": false, 00:08:55.291 "write_zeroes": true, 00:08:55.291 "zcopy": false, 00:08:55.291 "get_zone_info": false, 00:08:55.291 "zone_management": false, 00:08:55.291 "zone_append": false, 00:08:55.291 "compare": false, 00:08:55.291 "compare_and_write": false, 00:08:55.291 "abort": false, 00:08:55.291 "seek_hole": false, 00:08:55.291 "seek_data": false, 00:08:55.291 "copy": false, 00:08:55.291 "nvme_iov_md": false 00:08:55.291 }, 00:08:55.291 "memory_domains": [ 00:08:55.291 { 00:08:55.291 "dma_device_id": "system", 00:08:55.291 "dma_device_type": 1 00:08:55.291 }, 00:08:55.291 { 00:08:55.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.291 "dma_device_type": 2 00:08:55.291 }, 00:08:55.291 { 00:08:55.291 "dma_device_id": "system", 00:08:55.291 "dma_device_type": 1 00:08:55.291 }, 00:08:55.291 { 00:08:55.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.291 "dma_device_type": 2 00:08:55.291 }, 00:08:55.291 { 00:08:55.291 "dma_device_id": "system", 00:08:55.291 "dma_device_type": 1 00:08:55.291 }, 00:08:55.291 { 00:08:55.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.291 "dma_device_type": 2 00:08:55.291 } 00:08:55.291 ], 00:08:55.291 "driver_specific": { 00:08:55.291 "raid": { 00:08:55.291 "uuid": "8f67ab31-ec56-4982-bf66-852e4efcd3fb", 00:08:55.291 "strip_size_kb": 64, 00:08:55.291 "state": "online", 00:08:55.291 "raid_level": "concat", 00:08:55.291 "superblock": false, 00:08:55.291 "num_base_bdevs": 3, 00:08:55.291 "num_base_bdevs_discovered": 3, 00:08:55.291 "num_base_bdevs_operational": 3, 00:08:55.291 "base_bdevs_list": [ 00:08:55.291 { 00:08:55.291 "name": "BaseBdev1", 
00:08:55.291 "uuid": "de4913de-85a1-47c0-be19-483e04091524", 00:08:55.291 "is_configured": true, 00:08:55.291 "data_offset": 0, 00:08:55.291 "data_size": 65536 00:08:55.291 }, 00:08:55.291 { 00:08:55.291 "name": "BaseBdev2", 00:08:55.291 "uuid": "a464a856-d48e-4f37-b4ba-0e7022c91a6c", 00:08:55.291 "is_configured": true, 00:08:55.291 "data_offset": 0, 00:08:55.291 "data_size": 65536 00:08:55.291 }, 00:08:55.291 { 00:08:55.291 "name": "BaseBdev3", 00:08:55.291 "uuid": "7b6f2a60-e073-4954-85af-669ff16422ab", 00:08:55.292 "is_configured": true, 00:08:55.292 "data_offset": 0, 00:08:55.292 "data_size": 65536 00:08:55.292 } 00:08:55.292 ] 00:08:55.292 } 00:08:55.292 } 00:08:55.292 }' 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:55.292 BaseBdev2 00:08:55.292 BaseBdev3' 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.292 [2024-11-18 03:08:58.819785] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:55.292 [2024-11-18 03:08:58.819824] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.292 [2024-11-18 03:08:58.819896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.292 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.552 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.552 "name": "Existed_Raid", 00:08:55.552 "uuid": "8f67ab31-ec56-4982-bf66-852e4efcd3fb", 00:08:55.552 "strip_size_kb": 64, 00:08:55.552 "state": "offline", 00:08:55.552 "raid_level": "concat", 00:08:55.552 "superblock": false, 00:08:55.552 "num_base_bdevs": 3, 00:08:55.552 "num_base_bdevs_discovered": 2, 00:08:55.552 "num_base_bdevs_operational": 2, 00:08:55.552 "base_bdevs_list": [ 00:08:55.552 { 00:08:55.552 "name": null, 00:08:55.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.552 "is_configured": false, 00:08:55.552 "data_offset": 0, 00:08:55.552 "data_size": 65536 00:08:55.552 }, 00:08:55.552 { 00:08:55.552 "name": "BaseBdev2", 00:08:55.552 "uuid": 
"a464a856-d48e-4f37-b4ba-0e7022c91a6c", 00:08:55.552 "is_configured": true, 00:08:55.552 "data_offset": 0, 00:08:55.552 "data_size": 65536 00:08:55.552 }, 00:08:55.552 { 00:08:55.552 "name": "BaseBdev3", 00:08:55.552 "uuid": "7b6f2a60-e073-4954-85af-669ff16422ab", 00:08:55.552 "is_configured": true, 00:08:55.552 "data_offset": 0, 00:08:55.552 "data_size": 65536 00:08:55.552 } 00:08:55.552 ] 00:08:55.552 }' 00:08:55.552 03:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.552 03:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.819 [2024-11-18 03:08:59.326865] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:55.819 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.081 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:56.081 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:56.081 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.082 [2024-11-18 03:08:59.398482] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:56.082 [2024-11-18 03:08:59.398541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:56.082 03:08:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.082 BaseBdev2 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.082 
03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.082 [ 00:08:56.082 { 00:08:56.082 "name": "BaseBdev2", 00:08:56.082 "aliases": [ 00:08:56.082 "a4e9591e-4111-4dfa-b8d2-e48b5f32d390" 00:08:56.082 ], 00:08:56.082 "product_name": "Malloc disk", 00:08:56.082 "block_size": 512, 00:08:56.082 "num_blocks": 65536, 00:08:56.082 "uuid": "a4e9591e-4111-4dfa-b8d2-e48b5f32d390", 00:08:56.082 "assigned_rate_limits": { 00:08:56.082 "rw_ios_per_sec": 0, 00:08:56.082 "rw_mbytes_per_sec": 0, 00:08:56.082 "r_mbytes_per_sec": 0, 00:08:56.082 "w_mbytes_per_sec": 0 00:08:56.082 }, 00:08:56.082 "claimed": false, 00:08:56.082 "zoned": false, 00:08:56.082 "supported_io_types": { 00:08:56.082 "read": true, 00:08:56.082 "write": true, 00:08:56.082 "unmap": true, 00:08:56.082 "flush": true, 00:08:56.082 "reset": true, 00:08:56.082 "nvme_admin": false, 00:08:56.082 "nvme_io": false, 00:08:56.082 "nvme_io_md": false, 00:08:56.082 "write_zeroes": true, 
00:08:56.082 "zcopy": true, 00:08:56.082 "get_zone_info": false, 00:08:56.082 "zone_management": false, 00:08:56.082 "zone_append": false, 00:08:56.082 "compare": false, 00:08:56.082 "compare_and_write": false, 00:08:56.082 "abort": true, 00:08:56.082 "seek_hole": false, 00:08:56.082 "seek_data": false, 00:08:56.082 "copy": true, 00:08:56.082 "nvme_iov_md": false 00:08:56.082 }, 00:08:56.082 "memory_domains": [ 00:08:56.082 { 00:08:56.082 "dma_device_id": "system", 00:08:56.082 "dma_device_type": 1 00:08:56.082 }, 00:08:56.082 { 00:08:56.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.082 "dma_device_type": 2 00:08:56.082 } 00:08:56.082 ], 00:08:56.082 "driver_specific": {} 00:08:56.082 } 00:08:56.082 ] 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.082 BaseBdev3 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.082 03:08:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.082 [ 00:08:56.082 { 00:08:56.082 "name": "BaseBdev3", 00:08:56.082 "aliases": [ 00:08:56.082 "d9f67182-af18-4d03-a694-b426ad7919ae" 00:08:56.082 ], 00:08:56.082 "product_name": "Malloc disk", 00:08:56.082 "block_size": 512, 00:08:56.082 "num_blocks": 65536, 00:08:56.082 "uuid": "d9f67182-af18-4d03-a694-b426ad7919ae", 00:08:56.082 "assigned_rate_limits": { 00:08:56.082 "rw_ios_per_sec": 0, 00:08:56.082 "rw_mbytes_per_sec": 0, 00:08:56.082 "r_mbytes_per_sec": 0, 00:08:56.082 "w_mbytes_per_sec": 0 00:08:56.082 }, 00:08:56.082 "claimed": false, 00:08:56.082 "zoned": false, 00:08:56.082 "supported_io_types": { 00:08:56.082 "read": true, 00:08:56.082 "write": true, 00:08:56.082 "unmap": true, 00:08:56.082 "flush": true, 00:08:56.082 "reset": true, 00:08:56.082 "nvme_admin": false, 00:08:56.082 "nvme_io": false, 00:08:56.082 "nvme_io_md": false, 00:08:56.082 "write_zeroes": true, 
00:08:56.082 "zcopy": true, 00:08:56.082 "get_zone_info": false, 00:08:56.082 "zone_management": false, 00:08:56.082 "zone_append": false, 00:08:56.082 "compare": false, 00:08:56.082 "compare_and_write": false, 00:08:56.082 "abort": true, 00:08:56.082 "seek_hole": false, 00:08:56.082 "seek_data": false, 00:08:56.082 "copy": true, 00:08:56.082 "nvme_iov_md": false 00:08:56.082 }, 00:08:56.082 "memory_domains": [ 00:08:56.082 { 00:08:56.082 "dma_device_id": "system", 00:08:56.082 "dma_device_type": 1 00:08:56.082 }, 00:08:56.082 { 00:08:56.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.082 "dma_device_type": 2 00:08:56.082 } 00:08:56.082 ], 00:08:56.082 "driver_specific": {} 00:08:56.082 } 00:08:56.082 ] 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.082 [2024-11-18 03:08:59.568598] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.082 [2024-11-18 03:08:59.568707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.082 [2024-11-18 03:08:59.568755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.082 [2024-11-18 03:08:59.570798] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:56.082 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.083 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.083 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.083 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.083 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.083 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.083 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.083 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.083 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.083 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.083 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.083 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.083 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.083 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.083 03:08:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.083 "name": "Existed_Raid", 00:08:56.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.083 "strip_size_kb": 64, 00:08:56.083 "state": "configuring", 00:08:56.083 "raid_level": "concat", 00:08:56.083 "superblock": false, 00:08:56.083 "num_base_bdevs": 3, 00:08:56.083 "num_base_bdevs_discovered": 2, 00:08:56.083 "num_base_bdevs_operational": 3, 00:08:56.083 "base_bdevs_list": [ 00:08:56.083 { 00:08:56.083 "name": "BaseBdev1", 00:08:56.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.083 "is_configured": false, 00:08:56.083 "data_offset": 0, 00:08:56.083 "data_size": 0 00:08:56.083 }, 00:08:56.083 { 00:08:56.083 "name": "BaseBdev2", 00:08:56.083 "uuid": "a4e9591e-4111-4dfa-b8d2-e48b5f32d390", 00:08:56.083 "is_configured": true, 00:08:56.083 "data_offset": 0, 00:08:56.083 "data_size": 65536 00:08:56.083 }, 00:08:56.083 { 00:08:56.083 "name": "BaseBdev3", 00:08:56.083 "uuid": "d9f67182-af18-4d03-a694-b426ad7919ae", 00:08:56.083 "is_configured": true, 00:08:56.083 "data_offset": 0, 00:08:56.083 "data_size": 65536 00:08:56.083 } 00:08:56.083 ] 00:08:56.083 }' 00:08:56.083 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.083 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.655 03:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:56.655 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.655 03:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.655 [2024-11-18 03:09:00.003826] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.655 "name": "Existed_Raid", 00:08:56.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.655 "strip_size_kb": 64, 00:08:56.655 "state": "configuring", 00:08:56.655 "raid_level": "concat", 00:08:56.655 "superblock": false, 
00:08:56.655 "num_base_bdevs": 3, 00:08:56.655 "num_base_bdevs_discovered": 1, 00:08:56.655 "num_base_bdevs_operational": 3, 00:08:56.655 "base_bdevs_list": [ 00:08:56.655 { 00:08:56.655 "name": "BaseBdev1", 00:08:56.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.655 "is_configured": false, 00:08:56.655 "data_offset": 0, 00:08:56.655 "data_size": 0 00:08:56.655 }, 00:08:56.655 { 00:08:56.655 "name": null, 00:08:56.655 "uuid": "a4e9591e-4111-4dfa-b8d2-e48b5f32d390", 00:08:56.655 "is_configured": false, 00:08:56.655 "data_offset": 0, 00:08:56.655 "data_size": 65536 00:08:56.655 }, 00:08:56.655 { 00:08:56.655 "name": "BaseBdev3", 00:08:56.655 "uuid": "d9f67182-af18-4d03-a694-b426ad7919ae", 00:08:56.655 "is_configured": true, 00:08:56.655 "data_offset": 0, 00:08:56.655 "data_size": 65536 00:08:56.655 } 00:08:56.655 ] 00:08:56.655 }' 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.655 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.915 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:56.915 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.915 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.915 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.915 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.915 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:56.915 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:56.915 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.915 
03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.175 [2024-11-18 03:09:00.502330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.175 BaseBdev1 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.176 [ 00:08:57.176 { 00:08:57.176 "name": "BaseBdev1", 00:08:57.176 "aliases": [ 00:08:57.176 "887be09a-d057-4b94-9a4c-4381534655bf" 00:08:57.176 ], 00:08:57.176 "product_name": 
"Malloc disk", 00:08:57.176 "block_size": 512, 00:08:57.176 "num_blocks": 65536, 00:08:57.176 "uuid": "887be09a-d057-4b94-9a4c-4381534655bf", 00:08:57.176 "assigned_rate_limits": { 00:08:57.176 "rw_ios_per_sec": 0, 00:08:57.176 "rw_mbytes_per_sec": 0, 00:08:57.176 "r_mbytes_per_sec": 0, 00:08:57.176 "w_mbytes_per_sec": 0 00:08:57.176 }, 00:08:57.176 "claimed": true, 00:08:57.176 "claim_type": "exclusive_write", 00:08:57.176 "zoned": false, 00:08:57.176 "supported_io_types": { 00:08:57.176 "read": true, 00:08:57.176 "write": true, 00:08:57.176 "unmap": true, 00:08:57.176 "flush": true, 00:08:57.176 "reset": true, 00:08:57.176 "nvme_admin": false, 00:08:57.176 "nvme_io": false, 00:08:57.176 "nvme_io_md": false, 00:08:57.176 "write_zeroes": true, 00:08:57.176 "zcopy": true, 00:08:57.176 "get_zone_info": false, 00:08:57.176 "zone_management": false, 00:08:57.176 "zone_append": false, 00:08:57.176 "compare": false, 00:08:57.176 "compare_and_write": false, 00:08:57.176 "abort": true, 00:08:57.176 "seek_hole": false, 00:08:57.176 "seek_data": false, 00:08:57.176 "copy": true, 00:08:57.176 "nvme_iov_md": false 00:08:57.176 }, 00:08:57.176 "memory_domains": [ 00:08:57.176 { 00:08:57.176 "dma_device_id": "system", 00:08:57.176 "dma_device_type": 1 00:08:57.176 }, 00:08:57.176 { 00:08:57.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.176 "dma_device_type": 2 00:08:57.176 } 00:08:57.176 ], 00:08:57.176 "driver_specific": {} 00:08:57.176 } 00:08:57.176 ] 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.176 03:09:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.176 "name": "Existed_Raid", 00:08:57.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.176 "strip_size_kb": 64, 00:08:57.176 "state": "configuring", 00:08:57.176 "raid_level": "concat", 00:08:57.176 "superblock": false, 00:08:57.176 "num_base_bdevs": 3, 00:08:57.176 "num_base_bdevs_discovered": 2, 00:08:57.176 "num_base_bdevs_operational": 3, 00:08:57.176 "base_bdevs_list": [ 00:08:57.176 { 00:08:57.176 "name": "BaseBdev1", 
00:08:57.176 "uuid": "887be09a-d057-4b94-9a4c-4381534655bf", 00:08:57.176 "is_configured": true, 00:08:57.176 "data_offset": 0, 00:08:57.176 "data_size": 65536 00:08:57.176 }, 00:08:57.176 { 00:08:57.176 "name": null, 00:08:57.176 "uuid": "a4e9591e-4111-4dfa-b8d2-e48b5f32d390", 00:08:57.176 "is_configured": false, 00:08:57.176 "data_offset": 0, 00:08:57.176 "data_size": 65536 00:08:57.176 }, 00:08:57.176 { 00:08:57.176 "name": "BaseBdev3", 00:08:57.176 "uuid": "d9f67182-af18-4d03-a694-b426ad7919ae", 00:08:57.176 "is_configured": true, 00:08:57.176 "data_offset": 0, 00:08:57.176 "data_size": 65536 00:08:57.176 } 00:08:57.176 ] 00:08:57.176 }' 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.176 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.435 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:57.436 03:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.436 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.436 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.436 03:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.696 [2024-11-18 03:09:01.025512] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:57.696 
03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.696 "name": "Existed_Raid", 00:08:57.696 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:57.696 "strip_size_kb": 64, 00:08:57.696 "state": "configuring", 00:08:57.696 "raid_level": "concat", 00:08:57.696 "superblock": false, 00:08:57.696 "num_base_bdevs": 3, 00:08:57.696 "num_base_bdevs_discovered": 1, 00:08:57.696 "num_base_bdevs_operational": 3, 00:08:57.696 "base_bdevs_list": [ 00:08:57.696 { 00:08:57.696 "name": "BaseBdev1", 00:08:57.696 "uuid": "887be09a-d057-4b94-9a4c-4381534655bf", 00:08:57.696 "is_configured": true, 00:08:57.696 "data_offset": 0, 00:08:57.696 "data_size": 65536 00:08:57.696 }, 00:08:57.696 { 00:08:57.696 "name": null, 00:08:57.696 "uuid": "a4e9591e-4111-4dfa-b8d2-e48b5f32d390", 00:08:57.696 "is_configured": false, 00:08:57.696 "data_offset": 0, 00:08:57.696 "data_size": 65536 00:08:57.696 }, 00:08:57.696 { 00:08:57.696 "name": null, 00:08:57.696 "uuid": "d9f67182-af18-4d03-a694-b426ad7919ae", 00:08:57.696 "is_configured": false, 00:08:57.696 "data_offset": 0, 00:08:57.696 "data_size": 65536 00:08:57.696 } 00:08:57.696 ] 00:08:57.696 }' 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.696 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.957 [2024-11-18 03:09:01.512721] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:57.957 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.216 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.216 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.216 "name": "Existed_Raid", 00:08:58.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.216 "strip_size_kb": 64, 00:08:58.216 "state": "configuring", 00:08:58.216 "raid_level": "concat", 00:08:58.216 "superblock": false, 00:08:58.216 "num_base_bdevs": 3, 00:08:58.216 "num_base_bdevs_discovered": 2, 00:08:58.216 "num_base_bdevs_operational": 3, 00:08:58.216 "base_bdevs_list": [ 00:08:58.216 { 00:08:58.216 "name": "BaseBdev1", 00:08:58.216 "uuid": "887be09a-d057-4b94-9a4c-4381534655bf", 00:08:58.216 "is_configured": true, 00:08:58.216 "data_offset": 0, 00:08:58.216 "data_size": 65536 00:08:58.216 }, 00:08:58.216 { 00:08:58.216 "name": null, 00:08:58.216 "uuid": "a4e9591e-4111-4dfa-b8d2-e48b5f32d390", 00:08:58.216 "is_configured": false, 00:08:58.216 "data_offset": 0, 00:08:58.216 "data_size": 65536 00:08:58.216 }, 00:08:58.216 { 00:08:58.216 "name": "BaseBdev3", 00:08:58.216 "uuid": "d9f67182-af18-4d03-a694-b426ad7919ae", 00:08:58.216 "is_configured": true, 00:08:58.216 "data_offset": 0, 00:08:58.216 "data_size": 65536 00:08:58.216 } 00:08:58.216 ] 00:08:58.216 }' 00:08:58.216 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.216 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.476 03:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.476 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.476 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.476 03:09:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:58.476 03:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.476 [2024-11-18 03:09:02.027832] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.476 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.735 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.735 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.735 "name": "Existed_Raid", 00:08:58.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.735 "strip_size_kb": 64, 00:08:58.735 "state": "configuring", 00:08:58.735 "raid_level": "concat", 00:08:58.735 "superblock": false, 00:08:58.735 "num_base_bdevs": 3, 00:08:58.735 "num_base_bdevs_discovered": 1, 00:08:58.735 "num_base_bdevs_operational": 3, 00:08:58.735 "base_bdevs_list": [ 00:08:58.735 { 00:08:58.735 "name": null, 00:08:58.735 "uuid": "887be09a-d057-4b94-9a4c-4381534655bf", 00:08:58.735 "is_configured": false, 00:08:58.735 "data_offset": 0, 00:08:58.735 "data_size": 65536 00:08:58.735 }, 00:08:58.735 { 00:08:58.735 "name": null, 00:08:58.735 "uuid": "a4e9591e-4111-4dfa-b8d2-e48b5f32d390", 00:08:58.735 "is_configured": false, 00:08:58.735 "data_offset": 0, 00:08:58.735 "data_size": 65536 00:08:58.735 }, 00:08:58.735 { 00:08:58.735 "name": "BaseBdev3", 00:08:58.735 "uuid": "d9f67182-af18-4d03-a694-b426ad7919ae", 00:08:58.735 "is_configured": true, 00:08:58.735 "data_offset": 0, 00:08:58.735 "data_size": 65536 00:08:58.735 } 00:08:58.735 ] 00:08:58.735 }' 00:08:58.735 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.735 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.996 [2024-11-18 03:09:02.549796] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.996 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.255 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.255 "name": "Existed_Raid", 00:08:59.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.255 "strip_size_kb": 64, 00:08:59.255 "state": "configuring", 00:08:59.255 "raid_level": "concat", 00:08:59.255 "superblock": false, 00:08:59.255 "num_base_bdevs": 3, 00:08:59.255 "num_base_bdevs_discovered": 2, 00:08:59.255 "num_base_bdevs_operational": 3, 00:08:59.255 "base_bdevs_list": [ 00:08:59.255 { 00:08:59.255 "name": null, 00:08:59.255 "uuid": "887be09a-d057-4b94-9a4c-4381534655bf", 00:08:59.255 "is_configured": false, 00:08:59.255 "data_offset": 0, 00:08:59.255 "data_size": 65536 00:08:59.255 }, 00:08:59.255 { 00:08:59.255 "name": "BaseBdev2", 00:08:59.255 "uuid": "a4e9591e-4111-4dfa-b8d2-e48b5f32d390", 00:08:59.255 "is_configured": true, 00:08:59.255 "data_offset": 0, 00:08:59.255 "data_size": 65536 00:08:59.255 }, 00:08:59.255 { 
00:08:59.255 "name": "BaseBdev3", 00:08:59.255 "uuid": "d9f67182-af18-4d03-a694-b426ad7919ae", 00:08:59.255 "is_configured": true, 00:08:59.255 "data_offset": 0, 00:08:59.255 "data_size": 65536 00:08:59.255 } 00:08:59.255 ] 00:08:59.255 }' 00:08:59.255 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.256 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.515 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.515 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:59.515 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.515 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.515 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.515 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:59.515 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.515 03:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:59.515 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.515 03:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.515 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.515 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 887be09a-d057-4b94-9a4c-4381534655bf 00:08:59.515 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.515 03:09:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.515 [2024-11-18 03:09:03.056310] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:59.515 [2024-11-18 03:09:03.056358] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:59.515 [2024-11-18 03:09:03.056368] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:59.515 [2024-11-18 03:09:03.056611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:59.515 [2024-11-18 03:09:03.056721] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:59.515 [2024-11-18 03:09:03.056731] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:59.515 [2024-11-18 03:09:03.056922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.515 NewBaseBdev 00:08:59.515 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.515 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:59.515 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:59.515 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:59.515 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:59.515 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:59.515 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:59.515 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:59.515 03:09:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.515 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.515 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.516 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:59.516 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.516 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.775 [ 00:08:59.775 { 00:08:59.775 "name": "NewBaseBdev", 00:08:59.775 "aliases": [ 00:08:59.775 "887be09a-d057-4b94-9a4c-4381534655bf" 00:08:59.775 ], 00:08:59.775 "product_name": "Malloc disk", 00:08:59.775 "block_size": 512, 00:08:59.775 "num_blocks": 65536, 00:08:59.775 "uuid": "887be09a-d057-4b94-9a4c-4381534655bf", 00:08:59.775 "assigned_rate_limits": { 00:08:59.775 "rw_ios_per_sec": 0, 00:08:59.775 "rw_mbytes_per_sec": 0, 00:08:59.775 "r_mbytes_per_sec": 0, 00:08:59.775 "w_mbytes_per_sec": 0 00:08:59.775 }, 00:08:59.775 "claimed": true, 00:08:59.775 "claim_type": "exclusive_write", 00:08:59.775 "zoned": false, 00:08:59.775 "supported_io_types": { 00:08:59.775 "read": true, 00:08:59.775 "write": true, 00:08:59.775 "unmap": true, 00:08:59.775 "flush": true, 00:08:59.775 "reset": true, 00:08:59.775 "nvme_admin": false, 00:08:59.775 "nvme_io": false, 00:08:59.775 "nvme_io_md": false, 00:08:59.775 "write_zeroes": true, 00:08:59.775 "zcopy": true, 00:08:59.775 "get_zone_info": false, 00:08:59.775 "zone_management": false, 00:08:59.775 "zone_append": false, 00:08:59.775 "compare": false, 00:08:59.775 "compare_and_write": false, 00:08:59.775 "abort": true, 00:08:59.775 "seek_hole": false, 00:08:59.775 "seek_data": false, 00:08:59.775 "copy": true, 00:08:59.775 "nvme_iov_md": false 00:08:59.775 }, 00:08:59.775 "memory_domains": [ 00:08:59.775 { 00:08:59.775 
"dma_device_id": "system", 00:08:59.775 "dma_device_type": 1 00:08:59.775 }, 00:08:59.775 { 00:08:59.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.775 "dma_device_type": 2 00:08:59.775 } 00:08:59.775 ], 00:08:59.775 "driver_specific": {} 00:08:59.775 } 00:08:59.775 ] 00:08:59.775 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.775 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:59.776 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:59.776 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.776 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.776 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.776 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.776 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.776 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.776 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.776 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.776 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.776 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.776 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.776 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:59.776 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.776 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.776 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.776 "name": "Existed_Raid", 00:08:59.776 "uuid": "0cd7b7d0-4493-4128-a908-2cb9a37a9678", 00:08:59.776 "strip_size_kb": 64, 00:08:59.776 "state": "online", 00:08:59.776 "raid_level": "concat", 00:08:59.776 "superblock": false, 00:08:59.776 "num_base_bdevs": 3, 00:08:59.776 "num_base_bdevs_discovered": 3, 00:08:59.776 "num_base_bdevs_operational": 3, 00:08:59.776 "base_bdevs_list": [ 00:08:59.776 { 00:08:59.776 "name": "NewBaseBdev", 00:08:59.776 "uuid": "887be09a-d057-4b94-9a4c-4381534655bf", 00:08:59.776 "is_configured": true, 00:08:59.776 "data_offset": 0, 00:08:59.776 "data_size": 65536 00:08:59.776 }, 00:08:59.776 { 00:08:59.776 "name": "BaseBdev2", 00:08:59.776 "uuid": "a4e9591e-4111-4dfa-b8d2-e48b5f32d390", 00:08:59.776 "is_configured": true, 00:08:59.776 "data_offset": 0, 00:08:59.776 "data_size": 65536 00:08:59.776 }, 00:08:59.776 { 00:08:59.776 "name": "BaseBdev3", 00:08:59.776 "uuid": "d9f67182-af18-4d03-a694-b426ad7919ae", 00:08:59.776 "is_configured": true, 00:08:59.776 "data_offset": 0, 00:08:59.776 "data_size": 65536 00:08:59.776 } 00:08:59.776 ] 00:08:59.776 }' 00:08:59.776 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.776 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.036 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:00.036 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:00.036 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:00.036 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:00.036 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.036 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.036 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:00.036 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.036 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.036 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.036 [2024-11-18 03:09:03.531894] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.036 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.036 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.036 "name": "Existed_Raid", 00:09:00.036 "aliases": [ 00:09:00.036 "0cd7b7d0-4493-4128-a908-2cb9a37a9678" 00:09:00.036 ], 00:09:00.036 "product_name": "Raid Volume", 00:09:00.036 "block_size": 512, 00:09:00.036 "num_blocks": 196608, 00:09:00.036 "uuid": "0cd7b7d0-4493-4128-a908-2cb9a37a9678", 00:09:00.036 "assigned_rate_limits": { 00:09:00.036 "rw_ios_per_sec": 0, 00:09:00.036 "rw_mbytes_per_sec": 0, 00:09:00.036 "r_mbytes_per_sec": 0, 00:09:00.036 "w_mbytes_per_sec": 0 00:09:00.036 }, 00:09:00.036 "claimed": false, 00:09:00.036 "zoned": false, 00:09:00.036 "supported_io_types": { 00:09:00.036 "read": true, 00:09:00.036 "write": true, 00:09:00.036 "unmap": true, 00:09:00.036 "flush": true, 00:09:00.036 "reset": true, 00:09:00.036 "nvme_admin": false, 00:09:00.036 "nvme_io": false, 00:09:00.036 "nvme_io_md": false, 00:09:00.036 "write_zeroes": true, 00:09:00.036 "zcopy": false, 
00:09:00.036 "get_zone_info": false, 00:09:00.036 "zone_management": false, 00:09:00.036 "zone_append": false, 00:09:00.036 "compare": false, 00:09:00.036 "compare_and_write": false, 00:09:00.036 "abort": false, 00:09:00.036 "seek_hole": false, 00:09:00.036 "seek_data": false, 00:09:00.036 "copy": false, 00:09:00.036 "nvme_iov_md": false 00:09:00.036 }, 00:09:00.036 "memory_domains": [ 00:09:00.036 { 00:09:00.036 "dma_device_id": "system", 00:09:00.036 "dma_device_type": 1 00:09:00.036 }, 00:09:00.036 { 00:09:00.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.036 "dma_device_type": 2 00:09:00.036 }, 00:09:00.036 { 00:09:00.036 "dma_device_id": "system", 00:09:00.036 "dma_device_type": 1 00:09:00.036 }, 00:09:00.036 { 00:09:00.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.036 "dma_device_type": 2 00:09:00.036 }, 00:09:00.036 { 00:09:00.036 "dma_device_id": "system", 00:09:00.036 "dma_device_type": 1 00:09:00.036 }, 00:09:00.036 { 00:09:00.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.036 "dma_device_type": 2 00:09:00.036 } 00:09:00.036 ], 00:09:00.036 "driver_specific": { 00:09:00.036 "raid": { 00:09:00.036 "uuid": "0cd7b7d0-4493-4128-a908-2cb9a37a9678", 00:09:00.036 "strip_size_kb": 64, 00:09:00.036 "state": "online", 00:09:00.036 "raid_level": "concat", 00:09:00.036 "superblock": false, 00:09:00.036 "num_base_bdevs": 3, 00:09:00.036 "num_base_bdevs_discovered": 3, 00:09:00.036 "num_base_bdevs_operational": 3, 00:09:00.036 "base_bdevs_list": [ 00:09:00.036 { 00:09:00.036 "name": "NewBaseBdev", 00:09:00.036 "uuid": "887be09a-d057-4b94-9a4c-4381534655bf", 00:09:00.036 "is_configured": true, 00:09:00.036 "data_offset": 0, 00:09:00.036 "data_size": 65536 00:09:00.036 }, 00:09:00.036 { 00:09:00.036 "name": "BaseBdev2", 00:09:00.036 "uuid": "a4e9591e-4111-4dfa-b8d2-e48b5f32d390", 00:09:00.036 "is_configured": true, 00:09:00.036 "data_offset": 0, 00:09:00.036 "data_size": 65536 00:09:00.036 }, 00:09:00.036 { 00:09:00.036 "name": "BaseBdev3", 
00:09:00.036 "uuid": "d9f67182-af18-4d03-a694-b426ad7919ae", 00:09:00.036 "is_configured": true, 00:09:00.036 "data_offset": 0, 00:09:00.036 "data_size": 65536 00:09:00.036 } 00:09:00.036 ] 00:09:00.036 } 00:09:00.036 } 00:09:00.036 }' 00:09:00.036 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:00.297 BaseBdev2 00:09:00.297 BaseBdev3' 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:00.297 [2024-11-18 03:09:03.783171] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:00.297 [2024-11-18 03:09:03.783200] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.297 [2024-11-18 03:09:03.783278] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.297 [2024-11-18 03:09:03.783334] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.297 [2024-11-18 03:09:03.783346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76894 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 76894 ']' 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 76894 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76894 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76894' 00:09:00.297 killing process with pid 76894 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 76894 00:09:00.297 
[2024-11-18 03:09:03.830766] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:00.297 03:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 76894 00:09:00.297 [2024-11-18 03:09:03.862868] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:00.558 ************************************ 00:09:00.558 END TEST raid_state_function_test 00:09:00.558 ************************************ 00:09:00.558 03:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:00.558 00:09:00.558 real 0m8.905s 00:09:00.558 user 0m15.203s 00:09:00.558 sys 0m1.725s 00:09:00.558 03:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.558 03:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.817 03:09:04 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:00.817 03:09:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:00.817 03:09:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.817 03:09:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:00.817 ************************************ 00:09:00.817 START TEST raid_state_function_test_sb 00:09:00.817 ************************************ 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:00.818 03:09:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 
00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77498 00:09:00.818 Process raid pid: 77498 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77498' 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77498 00:09:00.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77498 ']' 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.818 03:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.818 [2024-11-18 03:09:04.269122] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:00.818 [2024-11-18 03:09:04.269318] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.078 [2024-11-18 03:09:04.445445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.078 [2024-11-18 03:09:04.496360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.078 [2024-11-18 03:09:04.539346] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.078 [2024-11-18 03:09:04.539472] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.649 [2024-11-18 03:09:05.110127] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:01.649 [2024-11-18 03:09:05.110176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.649 [2024-11-18 03:09:05.110198] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:01.649 [2024-11-18 03:09:05.110209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:01.649 [2024-11-18 03:09:05.110215] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:09:01.649 [2024-11-18 03:09:05.110229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.649 "name": "Existed_Raid", 00:09:01.649 "uuid": "b8eb63ce-4194-41ce-a01d-99881c0b6775", 00:09:01.649 "strip_size_kb": 64, 00:09:01.649 "state": "configuring", 00:09:01.649 "raid_level": "concat", 00:09:01.649 "superblock": true, 00:09:01.649 "num_base_bdevs": 3, 00:09:01.649 "num_base_bdevs_discovered": 0, 00:09:01.649 "num_base_bdevs_operational": 3, 00:09:01.649 "base_bdevs_list": [ 00:09:01.649 { 00:09:01.649 "name": "BaseBdev1", 00:09:01.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.649 "is_configured": false, 00:09:01.649 "data_offset": 0, 00:09:01.649 "data_size": 0 00:09:01.649 }, 00:09:01.649 { 00:09:01.649 "name": "BaseBdev2", 00:09:01.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.649 "is_configured": false, 00:09:01.649 "data_offset": 0, 00:09:01.649 "data_size": 0 00:09:01.649 }, 00:09:01.649 { 00:09:01.649 "name": "BaseBdev3", 00:09:01.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.649 "is_configured": false, 00:09:01.649 "data_offset": 0, 00:09:01.649 "data_size": 0 00:09:01.649 } 00:09:01.649 ] 00:09:01.649 }' 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.649 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.220 [2024-11-18 03:09:05.541313] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:02.220 [2024-11-18 03:09:05.541428] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.220 [2024-11-18 03:09:05.553330] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:02.220 [2024-11-18 03:09:05.553426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:02.220 [2024-11-18 03:09:05.553472] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:02.220 [2024-11-18 03:09:05.553498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:02.220 [2024-11-18 03:09:05.553519] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:02.220 [2024-11-18 03:09:05.553544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.220 [2024-11-18 03:09:05.574405] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.220 BaseBdev1 
00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.220 [ 00:09:02.220 { 00:09:02.220 "name": "BaseBdev1", 00:09:02.220 "aliases": [ 00:09:02.220 "e4482d4f-7baf-4efc-9652-032edae562a1" 00:09:02.220 ], 00:09:02.220 "product_name": "Malloc disk", 00:09:02.220 "block_size": 512, 00:09:02.220 "num_blocks": 65536, 00:09:02.220 "uuid": "e4482d4f-7baf-4efc-9652-032edae562a1", 00:09:02.220 "assigned_rate_limits": { 00:09:02.220 
"rw_ios_per_sec": 0, 00:09:02.220 "rw_mbytes_per_sec": 0, 00:09:02.220 "r_mbytes_per_sec": 0, 00:09:02.220 "w_mbytes_per_sec": 0 00:09:02.220 }, 00:09:02.220 "claimed": true, 00:09:02.220 "claim_type": "exclusive_write", 00:09:02.220 "zoned": false, 00:09:02.220 "supported_io_types": { 00:09:02.220 "read": true, 00:09:02.220 "write": true, 00:09:02.220 "unmap": true, 00:09:02.220 "flush": true, 00:09:02.220 "reset": true, 00:09:02.220 "nvme_admin": false, 00:09:02.220 "nvme_io": false, 00:09:02.220 "nvme_io_md": false, 00:09:02.220 "write_zeroes": true, 00:09:02.220 "zcopy": true, 00:09:02.220 "get_zone_info": false, 00:09:02.220 "zone_management": false, 00:09:02.220 "zone_append": false, 00:09:02.220 "compare": false, 00:09:02.220 "compare_and_write": false, 00:09:02.220 "abort": true, 00:09:02.220 "seek_hole": false, 00:09:02.220 "seek_data": false, 00:09:02.220 "copy": true, 00:09:02.220 "nvme_iov_md": false 00:09:02.220 }, 00:09:02.220 "memory_domains": [ 00:09:02.220 { 00:09:02.220 "dma_device_id": "system", 00:09:02.220 "dma_device_type": 1 00:09:02.220 }, 00:09:02.220 { 00:09:02.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.220 "dma_device_type": 2 00:09:02.220 } 00:09:02.220 ], 00:09:02.220 "driver_specific": {} 00:09:02.220 } 00:09:02.220 ] 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.220 "name": "Existed_Raid", 00:09:02.220 "uuid": "7972a86b-d411-46ed-bff0-fc60d35eb9a1", 00:09:02.220 "strip_size_kb": 64, 00:09:02.220 "state": "configuring", 00:09:02.220 "raid_level": "concat", 00:09:02.220 "superblock": true, 00:09:02.220 "num_base_bdevs": 3, 00:09:02.220 "num_base_bdevs_discovered": 1, 00:09:02.220 "num_base_bdevs_operational": 3, 00:09:02.220 "base_bdevs_list": [ 00:09:02.220 { 00:09:02.220 "name": "BaseBdev1", 00:09:02.220 "uuid": "e4482d4f-7baf-4efc-9652-032edae562a1", 00:09:02.220 "is_configured": true, 00:09:02.220 "data_offset": 2048, 00:09:02.220 "data_size": 
63488 00:09:02.220 }, 00:09:02.220 { 00:09:02.220 "name": "BaseBdev2", 00:09:02.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.220 "is_configured": false, 00:09:02.220 "data_offset": 0, 00:09:02.220 "data_size": 0 00:09:02.220 }, 00:09:02.220 { 00:09:02.220 "name": "BaseBdev3", 00:09:02.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.220 "is_configured": false, 00:09:02.220 "data_offset": 0, 00:09:02.220 "data_size": 0 00:09:02.220 } 00:09:02.220 ] 00:09:02.220 }' 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.220 03:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.790 [2024-11-18 03:09:06.061697] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:02.790 [2024-11-18 03:09:06.061750] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.790 [2024-11-18 03:09:06.073723] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.790 [2024-11-18 
03:09:06.075794] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:02.790 [2024-11-18 03:09:06.075842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:02.790 [2024-11-18 03:09:06.075853] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:02.790 [2024-11-18 03:09:06.075865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.790 "name": "Existed_Raid", 00:09:02.790 "uuid": "e386932b-43c4-46a6-a621-8512e0e64616", 00:09:02.790 "strip_size_kb": 64, 00:09:02.790 "state": "configuring", 00:09:02.790 "raid_level": "concat", 00:09:02.790 "superblock": true, 00:09:02.790 "num_base_bdevs": 3, 00:09:02.790 "num_base_bdevs_discovered": 1, 00:09:02.790 "num_base_bdevs_operational": 3, 00:09:02.790 "base_bdevs_list": [ 00:09:02.790 { 00:09:02.790 "name": "BaseBdev1", 00:09:02.790 "uuid": "e4482d4f-7baf-4efc-9652-032edae562a1", 00:09:02.790 "is_configured": true, 00:09:02.790 "data_offset": 2048, 00:09:02.790 "data_size": 63488 00:09:02.790 }, 00:09:02.790 { 00:09:02.790 "name": "BaseBdev2", 00:09:02.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.790 "is_configured": false, 00:09:02.790 "data_offset": 0, 00:09:02.790 "data_size": 0 00:09:02.790 }, 00:09:02.790 { 00:09:02.790 "name": "BaseBdev3", 00:09:02.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.790 "is_configured": false, 00:09:02.790 "data_offset": 0, 00:09:02.790 "data_size": 0 00:09:02.790 } 00:09:02.790 ] 00:09:02.790 }' 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.790 03:09:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.051 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:03.051 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.051 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.051 [2024-11-18 03:09:06.531729] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:03.051 BaseBdev2 00:09:03.051 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.051 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:03.051 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:03.051 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:03.051 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:03.051 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:03.051 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:03.051 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:03.051 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.051 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.051 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.051 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:03.051 03:09:06 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.051 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.051 [ 00:09:03.051 { 00:09:03.051 "name": "BaseBdev2", 00:09:03.051 "aliases": [ 00:09:03.051 "97f6fd5a-de6b-4cd4-b48c-f9fa95c8eb58" 00:09:03.051 ], 00:09:03.051 "product_name": "Malloc disk", 00:09:03.051 "block_size": 512, 00:09:03.051 "num_blocks": 65536, 00:09:03.051 "uuid": "97f6fd5a-de6b-4cd4-b48c-f9fa95c8eb58", 00:09:03.051 "assigned_rate_limits": { 00:09:03.051 "rw_ios_per_sec": 0, 00:09:03.051 "rw_mbytes_per_sec": 0, 00:09:03.051 "r_mbytes_per_sec": 0, 00:09:03.051 "w_mbytes_per_sec": 0 00:09:03.051 }, 00:09:03.051 "claimed": true, 00:09:03.051 "claim_type": "exclusive_write", 00:09:03.051 "zoned": false, 00:09:03.051 "supported_io_types": { 00:09:03.051 "read": true, 00:09:03.051 "write": true, 00:09:03.051 "unmap": true, 00:09:03.051 "flush": true, 00:09:03.051 "reset": true, 00:09:03.051 "nvme_admin": false, 00:09:03.051 "nvme_io": false, 00:09:03.051 "nvme_io_md": false, 00:09:03.051 "write_zeroes": true, 00:09:03.051 "zcopy": true, 00:09:03.051 "get_zone_info": false, 00:09:03.051 "zone_management": false, 00:09:03.051 "zone_append": false, 00:09:03.051 "compare": false, 00:09:03.051 "compare_and_write": false, 00:09:03.051 "abort": true, 00:09:03.051 "seek_hole": false, 00:09:03.051 "seek_data": false, 00:09:03.051 "copy": true, 00:09:03.051 "nvme_iov_md": false 00:09:03.051 }, 00:09:03.051 "memory_domains": [ 00:09:03.051 { 00:09:03.051 "dma_device_id": "system", 00:09:03.051 "dma_device_type": 1 00:09:03.051 }, 00:09:03.051 { 00:09:03.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.051 "dma_device_type": 2 00:09:03.051 } 00:09:03.051 ], 00:09:03.051 "driver_specific": {} 00:09:03.051 } 00:09:03.051 ] 00:09:03.051 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.052 "name": "Existed_Raid", 00:09:03.052 "uuid": "e386932b-43c4-46a6-a621-8512e0e64616", 00:09:03.052 "strip_size_kb": 64, 00:09:03.052 "state": "configuring", 00:09:03.052 "raid_level": "concat", 00:09:03.052 "superblock": true, 00:09:03.052 "num_base_bdevs": 3, 00:09:03.052 "num_base_bdevs_discovered": 2, 00:09:03.052 "num_base_bdevs_operational": 3, 00:09:03.052 "base_bdevs_list": [ 00:09:03.052 { 00:09:03.052 "name": "BaseBdev1", 00:09:03.052 "uuid": "e4482d4f-7baf-4efc-9652-032edae562a1", 00:09:03.052 "is_configured": true, 00:09:03.052 "data_offset": 2048, 00:09:03.052 "data_size": 63488 00:09:03.052 }, 00:09:03.052 { 00:09:03.052 "name": "BaseBdev2", 00:09:03.052 "uuid": "97f6fd5a-de6b-4cd4-b48c-f9fa95c8eb58", 00:09:03.052 "is_configured": true, 00:09:03.052 "data_offset": 2048, 00:09:03.052 "data_size": 63488 00:09:03.052 }, 00:09:03.052 { 00:09:03.052 "name": "BaseBdev3", 00:09:03.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.052 "is_configured": false, 00:09:03.052 "data_offset": 0, 00:09:03.052 "data_size": 0 00:09:03.052 } 00:09:03.052 ] 00:09:03.052 }' 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.052 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.623 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:03.623 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.623 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.623 [2024-11-18 03:09:06.998163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:03.623 BaseBdev3 00:09:03.623 [2024-11-18 
03:09:06.998462] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:03.623 [2024-11-18 03:09:06.998506] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:03.623 [2024-11-18 03:09:06.998817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:03.623 [2024-11-18 03:09:06.998934] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:03.623 [2024-11-18 03:09:06.998943] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:03.623 [2024-11-18 03:09:06.999105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.623 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.623 03:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:03.623 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:03.623 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:03.623 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:03.623 03:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.623 [ 00:09:03.623 { 00:09:03.623 "name": "BaseBdev3", 00:09:03.623 "aliases": [ 00:09:03.623 "aecfc7a1-13a0-4f03-943f-dad8cd4e6365" 00:09:03.623 ], 00:09:03.623 "product_name": "Malloc disk", 00:09:03.623 "block_size": 512, 00:09:03.623 "num_blocks": 65536, 00:09:03.623 "uuid": "aecfc7a1-13a0-4f03-943f-dad8cd4e6365", 00:09:03.623 "assigned_rate_limits": { 00:09:03.623 "rw_ios_per_sec": 0, 00:09:03.623 "rw_mbytes_per_sec": 0, 00:09:03.623 "r_mbytes_per_sec": 0, 00:09:03.623 "w_mbytes_per_sec": 0 00:09:03.623 }, 00:09:03.623 "claimed": true, 00:09:03.623 "claim_type": "exclusive_write", 00:09:03.623 "zoned": false, 00:09:03.623 "supported_io_types": { 00:09:03.623 "read": true, 00:09:03.623 "write": true, 00:09:03.623 "unmap": true, 00:09:03.623 "flush": true, 00:09:03.623 "reset": true, 00:09:03.623 "nvme_admin": false, 00:09:03.623 "nvme_io": false, 00:09:03.623 "nvme_io_md": false, 00:09:03.623 "write_zeroes": true, 00:09:03.623 "zcopy": true, 00:09:03.623 "get_zone_info": false, 00:09:03.623 "zone_management": false, 00:09:03.623 "zone_append": false, 00:09:03.623 "compare": false, 00:09:03.623 "compare_and_write": false, 00:09:03.623 "abort": true, 00:09:03.623 "seek_hole": false, 00:09:03.623 "seek_data": false, 00:09:03.623 "copy": true, 00:09:03.623 "nvme_iov_md": false 00:09:03.623 }, 00:09:03.623 "memory_domains": [ 00:09:03.623 { 00:09:03.623 "dma_device_id": "system", 00:09:03.623 "dma_device_type": 1 00:09:03.623 }, 00:09:03.623 { 00:09:03.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.623 "dma_device_type": 2 00:09:03.623 } 00:09:03.623 ], 00:09:03.623 "driver_specific": {} 
00:09:03.623 } 00:09:03.623 ] 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.623 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.624 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.624 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.624 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.624 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.624 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.624 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:03.624 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.624 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.624 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.624 "name": "Existed_Raid", 00:09:03.624 "uuid": "e386932b-43c4-46a6-a621-8512e0e64616", 00:09:03.624 "strip_size_kb": 64, 00:09:03.624 "state": "online", 00:09:03.624 "raid_level": "concat", 00:09:03.624 "superblock": true, 00:09:03.624 "num_base_bdevs": 3, 00:09:03.624 "num_base_bdevs_discovered": 3, 00:09:03.624 "num_base_bdevs_operational": 3, 00:09:03.624 "base_bdevs_list": [ 00:09:03.624 { 00:09:03.624 "name": "BaseBdev1", 00:09:03.624 "uuid": "e4482d4f-7baf-4efc-9652-032edae562a1", 00:09:03.624 "is_configured": true, 00:09:03.624 "data_offset": 2048, 00:09:03.624 "data_size": 63488 00:09:03.624 }, 00:09:03.624 { 00:09:03.624 "name": "BaseBdev2", 00:09:03.624 "uuid": "97f6fd5a-de6b-4cd4-b48c-f9fa95c8eb58", 00:09:03.624 "is_configured": true, 00:09:03.624 "data_offset": 2048, 00:09:03.624 "data_size": 63488 00:09:03.624 }, 00:09:03.624 { 00:09:03.624 "name": "BaseBdev3", 00:09:03.624 "uuid": "aecfc7a1-13a0-4f03-943f-dad8cd4e6365", 00:09:03.624 "is_configured": true, 00:09:03.624 "data_offset": 2048, 00:09:03.624 "data_size": 63488 00:09:03.624 } 00:09:03.624 ] 00:09:03.624 }' 00:09:03.624 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.624 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:04.193 [2024-11-18 03:09:07.513664] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:04.193 "name": "Existed_Raid", 00:09:04.193 "aliases": [ 00:09:04.193 "e386932b-43c4-46a6-a621-8512e0e64616" 00:09:04.193 ], 00:09:04.193 "product_name": "Raid Volume", 00:09:04.193 "block_size": 512, 00:09:04.193 "num_blocks": 190464, 00:09:04.193 "uuid": "e386932b-43c4-46a6-a621-8512e0e64616", 00:09:04.193 "assigned_rate_limits": { 00:09:04.193 "rw_ios_per_sec": 0, 00:09:04.193 "rw_mbytes_per_sec": 0, 00:09:04.193 "r_mbytes_per_sec": 0, 00:09:04.193 "w_mbytes_per_sec": 0 00:09:04.193 }, 00:09:04.193 "claimed": false, 00:09:04.193 "zoned": false, 00:09:04.193 "supported_io_types": { 00:09:04.193 "read": true, 00:09:04.193 "write": true, 00:09:04.193 "unmap": true, 00:09:04.193 "flush": true, 00:09:04.193 "reset": true, 00:09:04.193 "nvme_admin": false, 00:09:04.193 "nvme_io": false, 00:09:04.193 "nvme_io_md": false, 00:09:04.193 
"write_zeroes": true, 00:09:04.193 "zcopy": false, 00:09:04.193 "get_zone_info": false, 00:09:04.193 "zone_management": false, 00:09:04.193 "zone_append": false, 00:09:04.193 "compare": false, 00:09:04.193 "compare_and_write": false, 00:09:04.193 "abort": false, 00:09:04.193 "seek_hole": false, 00:09:04.193 "seek_data": false, 00:09:04.193 "copy": false, 00:09:04.193 "nvme_iov_md": false 00:09:04.193 }, 00:09:04.193 "memory_domains": [ 00:09:04.193 { 00:09:04.193 "dma_device_id": "system", 00:09:04.193 "dma_device_type": 1 00:09:04.193 }, 00:09:04.193 { 00:09:04.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.193 "dma_device_type": 2 00:09:04.193 }, 00:09:04.193 { 00:09:04.193 "dma_device_id": "system", 00:09:04.193 "dma_device_type": 1 00:09:04.193 }, 00:09:04.193 { 00:09:04.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.193 "dma_device_type": 2 00:09:04.193 }, 00:09:04.193 { 00:09:04.193 "dma_device_id": "system", 00:09:04.193 "dma_device_type": 1 00:09:04.193 }, 00:09:04.193 { 00:09:04.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.193 "dma_device_type": 2 00:09:04.193 } 00:09:04.193 ], 00:09:04.193 "driver_specific": { 00:09:04.193 "raid": { 00:09:04.193 "uuid": "e386932b-43c4-46a6-a621-8512e0e64616", 00:09:04.193 "strip_size_kb": 64, 00:09:04.193 "state": "online", 00:09:04.193 "raid_level": "concat", 00:09:04.193 "superblock": true, 00:09:04.193 "num_base_bdevs": 3, 00:09:04.193 "num_base_bdevs_discovered": 3, 00:09:04.193 "num_base_bdevs_operational": 3, 00:09:04.193 "base_bdevs_list": [ 00:09:04.193 { 00:09:04.193 "name": "BaseBdev1", 00:09:04.193 "uuid": "e4482d4f-7baf-4efc-9652-032edae562a1", 00:09:04.193 "is_configured": true, 00:09:04.193 "data_offset": 2048, 00:09:04.193 "data_size": 63488 00:09:04.193 }, 00:09:04.193 { 00:09:04.193 "name": "BaseBdev2", 00:09:04.193 "uuid": "97f6fd5a-de6b-4cd4-b48c-f9fa95c8eb58", 00:09:04.193 "is_configured": true, 00:09:04.193 "data_offset": 2048, 00:09:04.193 "data_size": 63488 00:09:04.193 }, 
00:09:04.193 { 00:09:04.193 "name": "BaseBdev3", 00:09:04.193 "uuid": "aecfc7a1-13a0-4f03-943f-dad8cd4e6365", 00:09:04.193 "is_configured": true, 00:09:04.193 "data_offset": 2048, 00:09:04.193 "data_size": 63488 00:09:04.193 } 00:09:04.193 ] 00:09:04.193 } 00:09:04.193 } 00:09:04.193 }' 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:04.193 BaseBdev2 00:09:04.193 BaseBdev3' 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.193 
03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.193 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.194 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:04.194 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.194 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.194 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.194 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.453 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.453 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.454 [2024-11-18 03:09:07.784991] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:04.454 [2024-11-18 03:09:07.785021] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.454 [2024-11-18 03:09:07.785081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.454 "name": "Existed_Raid", 00:09:04.454 "uuid": "e386932b-43c4-46a6-a621-8512e0e64616", 00:09:04.454 "strip_size_kb": 64, 00:09:04.454 "state": "offline", 00:09:04.454 "raid_level": "concat", 00:09:04.454 "superblock": true, 00:09:04.454 "num_base_bdevs": 3, 00:09:04.454 "num_base_bdevs_discovered": 2, 00:09:04.454 "num_base_bdevs_operational": 2, 00:09:04.454 "base_bdevs_list": [ 00:09:04.454 { 00:09:04.454 "name": null, 00:09:04.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.454 "is_configured": false, 00:09:04.454 "data_offset": 0, 00:09:04.454 "data_size": 63488 00:09:04.454 }, 00:09:04.454 { 00:09:04.454 "name": "BaseBdev2", 00:09:04.454 "uuid": "97f6fd5a-de6b-4cd4-b48c-f9fa95c8eb58", 00:09:04.454 "is_configured": true, 00:09:04.454 "data_offset": 2048, 00:09:04.454 "data_size": 63488 00:09:04.454 }, 00:09:04.454 { 00:09:04.454 "name": "BaseBdev3", 00:09:04.454 "uuid": "aecfc7a1-13a0-4f03-943f-dad8cd4e6365", 
00:09:04.454 "is_configured": true, 00:09:04.454 "data_offset": 2048, 00:09:04.454 "data_size": 63488 00:09:04.454 } 00:09:04.454 ] 00:09:04.454 }' 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.454 03:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.714 [2024-11-18 03:09:08.243837] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.714 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.974 [2024-11-18 03:09:08.307385] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:04.974 [2024-11-18 03:09:08.307444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.974 BaseBdev2 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:04.974 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:04.974 03:09:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.975 [ 00:09:04.975 { 00:09:04.975 "name": "BaseBdev2", 00:09:04.975 "aliases": [ 00:09:04.975 "b883c190-215b-495c-a4fb-8a6743c2f55f" 00:09:04.975 ], 00:09:04.975 "product_name": "Malloc disk", 00:09:04.975 "block_size": 512, 00:09:04.975 "num_blocks": 65536, 00:09:04.975 "uuid": "b883c190-215b-495c-a4fb-8a6743c2f55f", 00:09:04.975 "assigned_rate_limits": { 00:09:04.975 "rw_ios_per_sec": 0, 00:09:04.975 "rw_mbytes_per_sec": 0, 00:09:04.975 "r_mbytes_per_sec": 0, 00:09:04.975 "w_mbytes_per_sec": 0 00:09:04.975 }, 00:09:04.975 "claimed": false, 00:09:04.975 "zoned": false, 00:09:04.975 "supported_io_types": { 00:09:04.975 "read": true, 00:09:04.975 "write": true, 00:09:04.975 "unmap": true, 00:09:04.975 "flush": true, 00:09:04.975 "reset": true, 00:09:04.975 "nvme_admin": false, 00:09:04.975 "nvme_io": false, 00:09:04.975 "nvme_io_md": false, 00:09:04.975 "write_zeroes": true, 00:09:04.975 "zcopy": true, 00:09:04.975 "get_zone_info": false, 00:09:04.975 
"zone_management": false, 00:09:04.975 "zone_append": false, 00:09:04.975 "compare": false, 00:09:04.975 "compare_and_write": false, 00:09:04.975 "abort": true, 00:09:04.975 "seek_hole": false, 00:09:04.975 "seek_data": false, 00:09:04.975 "copy": true, 00:09:04.975 "nvme_iov_md": false 00:09:04.975 }, 00:09:04.975 "memory_domains": [ 00:09:04.975 { 00:09:04.975 "dma_device_id": "system", 00:09:04.975 "dma_device_type": 1 00:09:04.975 }, 00:09:04.975 { 00:09:04.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.975 "dma_device_type": 2 00:09:04.975 } 00:09:04.975 ], 00:09:04.975 "driver_specific": {} 00:09:04.975 } 00:09:04.975 ] 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.975 BaseBdev3 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.975 [ 00:09:04.975 { 00:09:04.975 "name": "BaseBdev3", 00:09:04.975 "aliases": [ 00:09:04.975 "9dfb7ea6-dd47-4afe-a8ee-e64c72804209" 00:09:04.975 ], 00:09:04.975 "product_name": "Malloc disk", 00:09:04.975 "block_size": 512, 00:09:04.975 "num_blocks": 65536, 00:09:04.975 "uuid": "9dfb7ea6-dd47-4afe-a8ee-e64c72804209", 00:09:04.975 "assigned_rate_limits": { 00:09:04.975 "rw_ios_per_sec": 0, 00:09:04.975 "rw_mbytes_per_sec": 0, 00:09:04.975 "r_mbytes_per_sec": 0, 00:09:04.975 "w_mbytes_per_sec": 0 00:09:04.975 }, 00:09:04.975 "claimed": false, 00:09:04.975 "zoned": false, 00:09:04.975 "supported_io_types": { 00:09:04.975 "read": true, 00:09:04.975 "write": true, 00:09:04.975 "unmap": true, 00:09:04.975 "flush": true, 00:09:04.975 "reset": true, 00:09:04.975 "nvme_admin": false, 00:09:04.975 "nvme_io": false, 00:09:04.975 "nvme_io_md": false, 00:09:04.975 "write_zeroes": true, 00:09:04.975 
"zcopy": true, 00:09:04.975 "get_zone_info": false, 00:09:04.975 "zone_management": false, 00:09:04.975 "zone_append": false, 00:09:04.975 "compare": false, 00:09:04.975 "compare_and_write": false, 00:09:04.975 "abort": true, 00:09:04.975 "seek_hole": false, 00:09:04.975 "seek_data": false, 00:09:04.975 "copy": true, 00:09:04.975 "nvme_iov_md": false 00:09:04.975 }, 00:09:04.975 "memory_domains": [ 00:09:04.975 { 00:09:04.975 "dma_device_id": "system", 00:09:04.975 "dma_device_type": 1 00:09:04.975 }, 00:09:04.975 { 00:09:04.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.975 "dma_device_type": 2 00:09:04.975 } 00:09:04.975 ], 00:09:04.975 "driver_specific": {} 00:09:04.975 } 00:09:04.975 ] 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.975 [2024-11-18 03:09:08.493473] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.975 [2024-11-18 03:09:08.493577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.975 [2024-11-18 03:09:08.493626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.975 [2024-11-18 03:09:08.495707] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.975 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.236 03:09:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.236 "name": "Existed_Raid", 00:09:05.236 "uuid": "27bb4517-a604-4758-8c1d-b1896c4a276f", 00:09:05.236 "strip_size_kb": 64, 00:09:05.236 "state": "configuring", 00:09:05.236 "raid_level": "concat", 00:09:05.236 "superblock": true, 00:09:05.236 "num_base_bdevs": 3, 00:09:05.236 "num_base_bdevs_discovered": 2, 00:09:05.236 "num_base_bdevs_operational": 3, 00:09:05.236 "base_bdevs_list": [ 00:09:05.236 { 00:09:05.236 "name": "BaseBdev1", 00:09:05.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.236 "is_configured": false, 00:09:05.236 "data_offset": 0, 00:09:05.236 "data_size": 0 00:09:05.236 }, 00:09:05.236 { 00:09:05.236 "name": "BaseBdev2", 00:09:05.236 "uuid": "b883c190-215b-495c-a4fb-8a6743c2f55f", 00:09:05.236 "is_configured": true, 00:09:05.236 "data_offset": 2048, 00:09:05.236 "data_size": 63488 00:09:05.236 }, 00:09:05.236 { 00:09:05.236 "name": "BaseBdev3", 00:09:05.236 "uuid": "9dfb7ea6-dd47-4afe-a8ee-e64c72804209", 00:09:05.236 "is_configured": true, 00:09:05.236 "data_offset": 2048, 00:09:05.236 "data_size": 63488 00:09:05.236 } 00:09:05.236 ] 00:09:05.236 }' 00:09:05.236 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.236 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.496 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:05.496 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.496 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.496 [2024-11-18 03:09:08.956641] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:05.496 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.496 03:09:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:05.496 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.496 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.496 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.496 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.496 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.496 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.496 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.496 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.496 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.496 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.496 03:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.496 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.496 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.496 03:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.496 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.496 "name": "Existed_Raid", 00:09:05.496 "uuid": "27bb4517-a604-4758-8c1d-b1896c4a276f", 00:09:05.496 "strip_size_kb": 64, 
00:09:05.496 "state": "configuring", 00:09:05.496 "raid_level": "concat", 00:09:05.496 "superblock": true, 00:09:05.496 "num_base_bdevs": 3, 00:09:05.496 "num_base_bdevs_discovered": 1, 00:09:05.496 "num_base_bdevs_operational": 3, 00:09:05.496 "base_bdevs_list": [ 00:09:05.496 { 00:09:05.496 "name": "BaseBdev1", 00:09:05.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.496 "is_configured": false, 00:09:05.496 "data_offset": 0, 00:09:05.496 "data_size": 0 00:09:05.496 }, 00:09:05.496 { 00:09:05.496 "name": null, 00:09:05.496 "uuid": "b883c190-215b-495c-a4fb-8a6743c2f55f", 00:09:05.496 "is_configured": false, 00:09:05.496 "data_offset": 0, 00:09:05.496 "data_size": 63488 00:09:05.496 }, 00:09:05.496 { 00:09:05.496 "name": "BaseBdev3", 00:09:05.496 "uuid": "9dfb7ea6-dd47-4afe-a8ee-e64c72804209", 00:09:05.496 "is_configured": true, 00:09:05.496 "data_offset": 2048, 00:09:05.496 "data_size": 63488 00:09:05.496 } 00:09:05.496 ] 00:09:05.496 }' 00:09:05.496 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.496 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.073 [2024-11-18 03:09:09.415207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:06.073 BaseBdev1 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.073 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.073 
[ 00:09:06.073 { 00:09:06.073 "name": "BaseBdev1", 00:09:06.073 "aliases": [ 00:09:06.073 "7f1cfc0d-16c0-4263-aa09-6aaa49a0b6e6" 00:09:06.073 ], 00:09:06.073 "product_name": "Malloc disk", 00:09:06.073 "block_size": 512, 00:09:06.073 "num_blocks": 65536, 00:09:06.073 "uuid": "7f1cfc0d-16c0-4263-aa09-6aaa49a0b6e6", 00:09:06.073 "assigned_rate_limits": { 00:09:06.073 "rw_ios_per_sec": 0, 00:09:06.073 "rw_mbytes_per_sec": 0, 00:09:06.073 "r_mbytes_per_sec": 0, 00:09:06.073 "w_mbytes_per_sec": 0 00:09:06.073 }, 00:09:06.073 "claimed": true, 00:09:06.073 "claim_type": "exclusive_write", 00:09:06.073 "zoned": false, 00:09:06.074 "supported_io_types": { 00:09:06.074 "read": true, 00:09:06.074 "write": true, 00:09:06.074 "unmap": true, 00:09:06.074 "flush": true, 00:09:06.074 "reset": true, 00:09:06.074 "nvme_admin": false, 00:09:06.074 "nvme_io": false, 00:09:06.074 "nvme_io_md": false, 00:09:06.074 "write_zeroes": true, 00:09:06.074 "zcopy": true, 00:09:06.074 "get_zone_info": false, 00:09:06.074 "zone_management": false, 00:09:06.074 "zone_append": false, 00:09:06.074 "compare": false, 00:09:06.074 "compare_and_write": false, 00:09:06.074 "abort": true, 00:09:06.074 "seek_hole": false, 00:09:06.074 "seek_data": false, 00:09:06.074 "copy": true, 00:09:06.074 "nvme_iov_md": false 00:09:06.074 }, 00:09:06.074 "memory_domains": [ 00:09:06.074 { 00:09:06.074 "dma_device_id": "system", 00:09:06.074 "dma_device_type": 1 00:09:06.074 }, 00:09:06.074 { 00:09:06.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.074 "dma_device_type": 2 00:09:06.074 } 00:09:06.074 ], 00:09:06.074 "driver_specific": {} 00:09:06.074 } 00:09:06.074 ] 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.074 "name": "Existed_Raid", 00:09:06.074 "uuid": "27bb4517-a604-4758-8c1d-b1896c4a276f", 00:09:06.074 "strip_size_kb": 64, 00:09:06.074 "state": "configuring", 00:09:06.074 "raid_level": "concat", 00:09:06.074 "superblock": true, 
00:09:06.074 "num_base_bdevs": 3, 00:09:06.074 "num_base_bdevs_discovered": 2, 00:09:06.074 "num_base_bdevs_operational": 3, 00:09:06.074 "base_bdevs_list": [ 00:09:06.074 { 00:09:06.074 "name": "BaseBdev1", 00:09:06.074 "uuid": "7f1cfc0d-16c0-4263-aa09-6aaa49a0b6e6", 00:09:06.074 "is_configured": true, 00:09:06.074 "data_offset": 2048, 00:09:06.074 "data_size": 63488 00:09:06.074 }, 00:09:06.074 { 00:09:06.074 "name": null, 00:09:06.074 "uuid": "b883c190-215b-495c-a4fb-8a6743c2f55f", 00:09:06.074 "is_configured": false, 00:09:06.074 "data_offset": 0, 00:09:06.074 "data_size": 63488 00:09:06.074 }, 00:09:06.074 { 00:09:06.074 "name": "BaseBdev3", 00:09:06.074 "uuid": "9dfb7ea6-dd47-4afe-a8ee-e64c72804209", 00:09:06.074 "is_configured": true, 00:09:06.074 "data_offset": 2048, 00:09:06.074 "data_size": 63488 00:09:06.074 } 00:09:06.074 ] 00:09:06.074 }' 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.074 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.334 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.334 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.334 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.334 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:06.334 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.593 [2024-11-18 03:09:09.934406] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.593 "name": "Existed_Raid", 00:09:06.593 "uuid": "27bb4517-a604-4758-8c1d-b1896c4a276f", 00:09:06.593 "strip_size_kb": 64, 00:09:06.593 "state": "configuring", 00:09:06.593 "raid_level": "concat", 00:09:06.593 "superblock": true, 00:09:06.593 "num_base_bdevs": 3, 00:09:06.593 "num_base_bdevs_discovered": 1, 00:09:06.593 "num_base_bdevs_operational": 3, 00:09:06.593 "base_bdevs_list": [ 00:09:06.593 { 00:09:06.593 "name": "BaseBdev1", 00:09:06.593 "uuid": "7f1cfc0d-16c0-4263-aa09-6aaa49a0b6e6", 00:09:06.593 "is_configured": true, 00:09:06.593 "data_offset": 2048, 00:09:06.593 "data_size": 63488 00:09:06.593 }, 00:09:06.593 { 00:09:06.593 "name": null, 00:09:06.593 "uuid": "b883c190-215b-495c-a4fb-8a6743c2f55f", 00:09:06.593 "is_configured": false, 00:09:06.593 "data_offset": 0, 00:09:06.593 "data_size": 63488 00:09:06.593 }, 00:09:06.593 { 00:09:06.593 "name": null, 00:09:06.593 "uuid": "9dfb7ea6-dd47-4afe-a8ee-e64c72804209", 00:09:06.593 "is_configured": false, 00:09:06.593 "data_offset": 0, 00:09:06.593 "data_size": 63488 00:09:06.593 } 00:09:06.593 ] 00:09:06.593 }' 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.593 03:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.854 [2024-11-18 03:09:10.409679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.854 03:09:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.854 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.113 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.113 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.113 "name": "Existed_Raid", 00:09:07.113 "uuid": "27bb4517-a604-4758-8c1d-b1896c4a276f", 00:09:07.113 "strip_size_kb": 64, 00:09:07.113 "state": "configuring", 00:09:07.113 "raid_level": "concat", 00:09:07.113 "superblock": true, 00:09:07.113 "num_base_bdevs": 3, 00:09:07.113 "num_base_bdevs_discovered": 2, 00:09:07.113 "num_base_bdevs_operational": 3, 00:09:07.113 "base_bdevs_list": [ 00:09:07.113 { 00:09:07.113 "name": "BaseBdev1", 00:09:07.113 "uuid": "7f1cfc0d-16c0-4263-aa09-6aaa49a0b6e6", 00:09:07.113 "is_configured": true, 00:09:07.113 "data_offset": 2048, 00:09:07.113 "data_size": 63488 00:09:07.113 }, 00:09:07.113 { 00:09:07.113 "name": null, 00:09:07.113 "uuid": "b883c190-215b-495c-a4fb-8a6743c2f55f", 00:09:07.113 "is_configured": false, 00:09:07.113 "data_offset": 0, 00:09:07.113 "data_size": 63488 00:09:07.113 }, 00:09:07.113 { 00:09:07.113 "name": "BaseBdev3", 00:09:07.113 "uuid": "9dfb7ea6-dd47-4afe-a8ee-e64c72804209", 00:09:07.113 "is_configured": true, 00:09:07.113 "data_offset": 2048, 00:09:07.113 "data_size": 63488 00:09:07.113 } 00:09:07.113 ] 00:09:07.113 }' 00:09:07.113 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.113 
03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.374 [2024-11-18 03:09:10.868911] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.374 03:09:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.374 "name": "Existed_Raid", 00:09:07.374 "uuid": "27bb4517-a604-4758-8c1d-b1896c4a276f", 00:09:07.374 "strip_size_kb": 64, 00:09:07.374 "state": "configuring", 00:09:07.374 "raid_level": "concat", 00:09:07.374 "superblock": true, 00:09:07.374 "num_base_bdevs": 3, 00:09:07.374 "num_base_bdevs_discovered": 1, 00:09:07.374 "num_base_bdevs_operational": 3, 00:09:07.374 "base_bdevs_list": [ 00:09:07.374 { 00:09:07.374 "name": null, 00:09:07.374 "uuid": "7f1cfc0d-16c0-4263-aa09-6aaa49a0b6e6", 00:09:07.374 "is_configured": false, 00:09:07.374 "data_offset": 0, 00:09:07.374 "data_size": 63488 00:09:07.374 }, 00:09:07.374 { 00:09:07.374 "name": null, 00:09:07.374 "uuid": "b883c190-215b-495c-a4fb-8a6743c2f55f", 00:09:07.374 "is_configured": false, 
00:09:07.374 "data_offset": 0, 00:09:07.374 "data_size": 63488 00:09:07.374 }, 00:09:07.374 { 00:09:07.374 "name": "BaseBdev3", 00:09:07.374 "uuid": "9dfb7ea6-dd47-4afe-a8ee-e64c72804209", 00:09:07.374 "is_configured": true, 00:09:07.374 "data_offset": 2048, 00:09:07.374 "data_size": 63488 00:09:07.374 } 00:09:07.374 ] 00:09:07.374 }' 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.374 03:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.944 [2024-11-18 03:09:11.366881] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.944 "name": "Existed_Raid", 00:09:07.944 "uuid": "27bb4517-a604-4758-8c1d-b1896c4a276f", 00:09:07.944 "strip_size_kb": 64, 00:09:07.944 "state": "configuring", 00:09:07.944 "raid_level": "concat", 00:09:07.944 "superblock": true, 00:09:07.944 
"num_base_bdevs": 3, 00:09:07.944 "num_base_bdevs_discovered": 2, 00:09:07.944 "num_base_bdevs_operational": 3, 00:09:07.944 "base_bdevs_list": [ 00:09:07.944 { 00:09:07.944 "name": null, 00:09:07.944 "uuid": "7f1cfc0d-16c0-4263-aa09-6aaa49a0b6e6", 00:09:07.944 "is_configured": false, 00:09:07.944 "data_offset": 0, 00:09:07.944 "data_size": 63488 00:09:07.944 }, 00:09:07.944 { 00:09:07.944 "name": "BaseBdev2", 00:09:07.944 "uuid": "b883c190-215b-495c-a4fb-8a6743c2f55f", 00:09:07.944 "is_configured": true, 00:09:07.944 "data_offset": 2048, 00:09:07.944 "data_size": 63488 00:09:07.944 }, 00:09:07.944 { 00:09:07.944 "name": "BaseBdev3", 00:09:07.944 "uuid": "9dfb7ea6-dd47-4afe-a8ee-e64c72804209", 00:09:07.944 "is_configured": true, 00:09:07.944 "data_offset": 2048, 00:09:07.944 "data_size": 63488 00:09:07.944 } 00:09:07.944 ] 00:09:07.944 }' 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.944 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.514 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.514 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:08.514 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.514 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.514 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.514 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:08.514 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.514 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:08.514 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.514 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:08.514 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.514 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7f1cfc0d-16c0-4263-aa09-6aaa49a0b6e6 00:09:08.514 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.514 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.514 [2024-11-18 03:09:11.925237] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:08.514 [2024-11-18 03:09:11.925429] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:08.514 [2024-11-18 03:09:11.925445] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:08.514 [2024-11-18 03:09:11.925688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:08.514 NewBaseBdev 00:09:08.514 [2024-11-18 03:09:11.925796] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:08.514 [2024-11-18 03:09:11.925806] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:08.514 [2024-11-18 03:09:11.925930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=NewBaseBdev 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.515 [ 00:09:08.515 { 00:09:08.515 "name": "NewBaseBdev", 00:09:08.515 "aliases": [ 00:09:08.515 "7f1cfc0d-16c0-4263-aa09-6aaa49a0b6e6" 00:09:08.515 ], 00:09:08.515 "product_name": "Malloc disk", 00:09:08.515 "block_size": 512, 00:09:08.515 "num_blocks": 65536, 00:09:08.515 "uuid": "7f1cfc0d-16c0-4263-aa09-6aaa49a0b6e6", 00:09:08.515 "assigned_rate_limits": { 00:09:08.515 "rw_ios_per_sec": 0, 00:09:08.515 "rw_mbytes_per_sec": 0, 00:09:08.515 "r_mbytes_per_sec": 0, 00:09:08.515 "w_mbytes_per_sec": 0 00:09:08.515 }, 00:09:08.515 "claimed": true, 00:09:08.515 "claim_type": "exclusive_write", 00:09:08.515 "zoned": false, 00:09:08.515 "supported_io_types": { 00:09:08.515 "read": true, 00:09:08.515 
"write": true, 00:09:08.515 "unmap": true, 00:09:08.515 "flush": true, 00:09:08.515 "reset": true, 00:09:08.515 "nvme_admin": false, 00:09:08.515 "nvme_io": false, 00:09:08.515 "nvme_io_md": false, 00:09:08.515 "write_zeroes": true, 00:09:08.515 "zcopy": true, 00:09:08.515 "get_zone_info": false, 00:09:08.515 "zone_management": false, 00:09:08.515 "zone_append": false, 00:09:08.515 "compare": false, 00:09:08.515 "compare_and_write": false, 00:09:08.515 "abort": true, 00:09:08.515 "seek_hole": false, 00:09:08.515 "seek_data": false, 00:09:08.515 "copy": true, 00:09:08.515 "nvme_iov_md": false 00:09:08.515 }, 00:09:08.515 "memory_domains": [ 00:09:08.515 { 00:09:08.515 "dma_device_id": "system", 00:09:08.515 "dma_device_type": 1 00:09:08.515 }, 00:09:08.515 { 00:09:08.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.515 "dma_device_type": 2 00:09:08.515 } 00:09:08.515 ], 00:09:08.515 "driver_specific": {} 00:09:08.515 } 00:09:08.515 ] 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.515 03:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.515 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.515 "name": "Existed_Raid", 00:09:08.515 "uuid": "27bb4517-a604-4758-8c1d-b1896c4a276f", 00:09:08.515 "strip_size_kb": 64, 00:09:08.515 "state": "online", 00:09:08.515 "raid_level": "concat", 00:09:08.515 "superblock": true, 00:09:08.515 "num_base_bdevs": 3, 00:09:08.515 "num_base_bdevs_discovered": 3, 00:09:08.515 "num_base_bdevs_operational": 3, 00:09:08.515 "base_bdevs_list": [ 00:09:08.515 { 00:09:08.515 "name": "NewBaseBdev", 00:09:08.515 "uuid": "7f1cfc0d-16c0-4263-aa09-6aaa49a0b6e6", 00:09:08.515 "is_configured": true, 00:09:08.515 "data_offset": 2048, 00:09:08.515 "data_size": 63488 00:09:08.515 }, 00:09:08.515 { 00:09:08.515 "name": "BaseBdev2", 00:09:08.515 "uuid": "b883c190-215b-495c-a4fb-8a6743c2f55f", 00:09:08.515 "is_configured": true, 00:09:08.515 "data_offset": 2048, 00:09:08.515 "data_size": 63488 00:09:08.515 }, 00:09:08.515 { 00:09:08.515 "name": "BaseBdev3", 00:09:08.515 "uuid": 
"9dfb7ea6-dd47-4afe-a8ee-e64c72804209", 00:09:08.515 "is_configured": true, 00:09:08.515 "data_offset": 2048, 00:09:08.515 "data_size": 63488 00:09:08.515 } 00:09:08.515 ] 00:09:08.515 }' 00:09:08.515 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.515 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.087 [2024-11-18 03:09:12.432723] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:09.087 "name": "Existed_Raid", 00:09:09.087 "aliases": [ 00:09:09.087 "27bb4517-a604-4758-8c1d-b1896c4a276f" 
00:09:09.087 ], 00:09:09.087 "product_name": "Raid Volume", 00:09:09.087 "block_size": 512, 00:09:09.087 "num_blocks": 190464, 00:09:09.087 "uuid": "27bb4517-a604-4758-8c1d-b1896c4a276f", 00:09:09.087 "assigned_rate_limits": { 00:09:09.087 "rw_ios_per_sec": 0, 00:09:09.087 "rw_mbytes_per_sec": 0, 00:09:09.087 "r_mbytes_per_sec": 0, 00:09:09.087 "w_mbytes_per_sec": 0 00:09:09.087 }, 00:09:09.087 "claimed": false, 00:09:09.087 "zoned": false, 00:09:09.087 "supported_io_types": { 00:09:09.087 "read": true, 00:09:09.087 "write": true, 00:09:09.087 "unmap": true, 00:09:09.087 "flush": true, 00:09:09.087 "reset": true, 00:09:09.087 "nvme_admin": false, 00:09:09.087 "nvme_io": false, 00:09:09.087 "nvme_io_md": false, 00:09:09.087 "write_zeroes": true, 00:09:09.087 "zcopy": false, 00:09:09.087 "get_zone_info": false, 00:09:09.087 "zone_management": false, 00:09:09.087 "zone_append": false, 00:09:09.087 "compare": false, 00:09:09.087 "compare_and_write": false, 00:09:09.087 "abort": false, 00:09:09.087 "seek_hole": false, 00:09:09.087 "seek_data": false, 00:09:09.087 "copy": false, 00:09:09.087 "nvme_iov_md": false 00:09:09.087 }, 00:09:09.087 "memory_domains": [ 00:09:09.087 { 00:09:09.087 "dma_device_id": "system", 00:09:09.087 "dma_device_type": 1 00:09:09.087 }, 00:09:09.087 { 00:09:09.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.087 "dma_device_type": 2 00:09:09.087 }, 00:09:09.087 { 00:09:09.087 "dma_device_id": "system", 00:09:09.087 "dma_device_type": 1 00:09:09.087 }, 00:09:09.087 { 00:09:09.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.087 "dma_device_type": 2 00:09:09.087 }, 00:09:09.087 { 00:09:09.087 "dma_device_id": "system", 00:09:09.087 "dma_device_type": 1 00:09:09.087 }, 00:09:09.087 { 00:09:09.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.087 "dma_device_type": 2 00:09:09.087 } 00:09:09.087 ], 00:09:09.087 "driver_specific": { 00:09:09.087 "raid": { 00:09:09.087 "uuid": "27bb4517-a604-4758-8c1d-b1896c4a276f", 00:09:09.087 
"strip_size_kb": 64, 00:09:09.087 "state": "online", 00:09:09.087 "raid_level": "concat", 00:09:09.087 "superblock": true, 00:09:09.087 "num_base_bdevs": 3, 00:09:09.087 "num_base_bdevs_discovered": 3, 00:09:09.087 "num_base_bdevs_operational": 3, 00:09:09.087 "base_bdevs_list": [ 00:09:09.087 { 00:09:09.087 "name": "NewBaseBdev", 00:09:09.087 "uuid": "7f1cfc0d-16c0-4263-aa09-6aaa49a0b6e6", 00:09:09.087 "is_configured": true, 00:09:09.087 "data_offset": 2048, 00:09:09.087 "data_size": 63488 00:09:09.087 }, 00:09:09.087 { 00:09:09.087 "name": "BaseBdev2", 00:09:09.087 "uuid": "b883c190-215b-495c-a4fb-8a6743c2f55f", 00:09:09.087 "is_configured": true, 00:09:09.087 "data_offset": 2048, 00:09:09.087 "data_size": 63488 00:09:09.087 }, 00:09:09.087 { 00:09:09.087 "name": "BaseBdev3", 00:09:09.087 "uuid": "9dfb7ea6-dd47-4afe-a8ee-e64c72804209", 00:09:09.087 "is_configured": true, 00:09:09.087 "data_offset": 2048, 00:09:09.087 "data_size": 63488 00:09:09.087 } 00:09:09.087 ] 00:09:09.087 } 00:09:09.087 } 00:09:09.087 }' 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:09.087 BaseBdev2 00:09:09.087 BaseBdev3' 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.087 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.088 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.088 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.088 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.088 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.088 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.088 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.088 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:09.088 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.088 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.088 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.348 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.348 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.348 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.348 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:09.348 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:09.348 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.348 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.348 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.348 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.348 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.348 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.348 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.348 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.348 [2024-11-18 03:09:12.719952] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.348 [2024-11-18 03:09:12.719997] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.348 [2024-11-18 03:09:12.720098] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.349 [2024-11-18 03:09:12.720158] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.349 [2024-11-18 03:09:12.720171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:09.349 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.349 03:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77498 00:09:09.349 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77498 ']' 00:09:09.349 03:09:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 77498 00:09:09.349 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:09.349 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:09.349 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77498 00:09:09.349 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:09.349 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:09.349 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77498' 00:09:09.349 killing process with pid 77498 00:09:09.349 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77498 00:09:09.349 [2024-11-18 03:09:12.760172] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:09.349 03:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77498 00:09:09.349 [2024-11-18 03:09:12.792416] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:09.609 03:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:09.609 00:09:09.609 real 0m8.870s 00:09:09.609 user 0m15.136s 00:09:09.609 sys 0m1.737s 00:09:09.609 ************************************ 00:09:09.609 END TEST raid_state_function_test_sb 00:09:09.609 ************************************ 00:09:09.609 03:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.609 03:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.609 03:09:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:09.609 03:09:13 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:09.609 03:09:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.609 03:09:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:09.609 ************************************ 00:09:09.609 START TEST raid_superblock_test 00:09:09.609 ************************************ 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:09.609 03:09:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78102 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78102 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 78102 ']' 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.609 03:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.869 [2024-11-18 03:09:13.189652] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:09.869 [2024-11-18 03:09:13.189994] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78102 ] 00:09:09.869 [2024-11-18 03:09:13.336392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.869 [2024-11-18 03:09:13.386727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.869 [2024-11-18 03:09:13.430162] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.869 [2024-11-18 03:09:13.430288] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:10.809 
03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.809 malloc1 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.809 [2024-11-18 03:09:14.061535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:10.809 [2024-11-18 03:09:14.061690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.809 [2024-11-18 03:09:14.061736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:10.809 [2024-11-18 03:09:14.061775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.809 [2024-11-18 03:09:14.064227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.809 [2024-11-18 03:09:14.064330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:10.809 pt1 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.809 malloc2 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.809 [2024-11-18 03:09:14.105239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:10.809 [2024-11-18 03:09:14.105329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.809 [2024-11-18 03:09:14.105358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:10.809 [2024-11-18 03:09:14.105374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.809 [2024-11-18 03:09:14.108656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.809 [2024-11-18 03:09:14.108779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:10.809 
pt2 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.809 malloc3 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.809 [2024-11-18 03:09:14.138316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:10.809 [2024-11-18 03:09:14.138432] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.809 [2024-11-18 03:09:14.138470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:10.809 [2024-11-18 03:09:14.138501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.809 [2024-11-18 03:09:14.140829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.809 [2024-11-18 03:09:14.140928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:10.809 pt3 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.809 [2024-11-18 03:09:14.150357] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:10.809 [2024-11-18 03:09:14.152429] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:10.809 [2024-11-18 03:09:14.152544] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:10.809 [2024-11-18 03:09:14.152747] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:10.809 [2024-11-18 03:09:14.152794] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.809 [2024-11-18 03:09:14.153123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:09:10.809 [2024-11-18 03:09:14.153331] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:10.809 [2024-11-18 03:09:14.153390] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:10.809 [2024-11-18 03:09:14.153596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.809 03:09:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.809 "name": "raid_bdev1", 00:09:10.809 "uuid": "5463acda-dddb-498e-8495-1f0e925da7ca", 00:09:10.809 "strip_size_kb": 64, 00:09:10.809 "state": "online", 00:09:10.809 "raid_level": "concat", 00:09:10.809 "superblock": true, 00:09:10.809 "num_base_bdevs": 3, 00:09:10.809 "num_base_bdevs_discovered": 3, 00:09:10.809 "num_base_bdevs_operational": 3, 00:09:10.809 "base_bdevs_list": [ 00:09:10.809 { 00:09:10.809 "name": "pt1", 00:09:10.809 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.809 "is_configured": true, 00:09:10.809 "data_offset": 2048, 00:09:10.809 "data_size": 63488 00:09:10.809 }, 00:09:10.809 { 00:09:10.809 "name": "pt2", 00:09:10.809 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.809 "is_configured": true, 00:09:10.809 "data_offset": 2048, 00:09:10.809 "data_size": 63488 00:09:10.809 }, 00:09:10.809 { 00:09:10.809 "name": "pt3", 00:09:10.809 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:10.809 "is_configured": true, 00:09:10.809 "data_offset": 2048, 00:09:10.809 "data_size": 63488 00:09:10.809 } 00:09:10.809 ] 00:09:10.809 }' 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.809 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.069 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:11.069 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:11.069 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:11.069 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:11.069 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.069 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.069 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.069 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:11.069 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.069 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.069 [2024-11-18 03:09:14.605893] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.069 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.069 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.069 "name": "raid_bdev1", 00:09:11.069 "aliases": [ 00:09:11.069 "5463acda-dddb-498e-8495-1f0e925da7ca" 00:09:11.069 ], 00:09:11.069 "product_name": "Raid Volume", 00:09:11.069 "block_size": 512, 00:09:11.069 "num_blocks": 190464, 00:09:11.069 "uuid": "5463acda-dddb-498e-8495-1f0e925da7ca", 00:09:11.069 "assigned_rate_limits": { 00:09:11.069 "rw_ios_per_sec": 0, 00:09:11.069 "rw_mbytes_per_sec": 0, 00:09:11.069 "r_mbytes_per_sec": 0, 00:09:11.069 "w_mbytes_per_sec": 0 00:09:11.069 }, 00:09:11.069 "claimed": false, 00:09:11.069 "zoned": false, 00:09:11.069 "supported_io_types": { 00:09:11.069 "read": true, 00:09:11.069 "write": true, 00:09:11.069 "unmap": true, 00:09:11.069 "flush": true, 00:09:11.069 "reset": true, 00:09:11.069 "nvme_admin": false, 00:09:11.069 "nvme_io": false, 00:09:11.069 "nvme_io_md": false, 00:09:11.069 "write_zeroes": true, 00:09:11.069 "zcopy": false, 00:09:11.069 "get_zone_info": false, 00:09:11.069 "zone_management": false, 00:09:11.069 "zone_append": false, 00:09:11.069 "compare": 
false, 00:09:11.069 "compare_and_write": false, 00:09:11.069 "abort": false, 00:09:11.069 "seek_hole": false, 00:09:11.069 "seek_data": false, 00:09:11.069 "copy": false, 00:09:11.069 "nvme_iov_md": false 00:09:11.069 }, 00:09:11.069 "memory_domains": [ 00:09:11.069 { 00:09:11.069 "dma_device_id": "system", 00:09:11.069 "dma_device_type": 1 00:09:11.069 }, 00:09:11.069 { 00:09:11.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.069 "dma_device_type": 2 00:09:11.069 }, 00:09:11.069 { 00:09:11.069 "dma_device_id": "system", 00:09:11.069 "dma_device_type": 1 00:09:11.069 }, 00:09:11.069 { 00:09:11.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.069 "dma_device_type": 2 00:09:11.069 }, 00:09:11.069 { 00:09:11.069 "dma_device_id": "system", 00:09:11.069 "dma_device_type": 1 00:09:11.069 }, 00:09:11.069 { 00:09:11.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.069 "dma_device_type": 2 00:09:11.069 } 00:09:11.069 ], 00:09:11.069 "driver_specific": { 00:09:11.069 "raid": { 00:09:11.069 "uuid": "5463acda-dddb-498e-8495-1f0e925da7ca", 00:09:11.069 "strip_size_kb": 64, 00:09:11.069 "state": "online", 00:09:11.069 "raid_level": "concat", 00:09:11.069 "superblock": true, 00:09:11.069 "num_base_bdevs": 3, 00:09:11.069 "num_base_bdevs_discovered": 3, 00:09:11.069 "num_base_bdevs_operational": 3, 00:09:11.069 "base_bdevs_list": [ 00:09:11.069 { 00:09:11.069 "name": "pt1", 00:09:11.069 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:11.069 "is_configured": true, 00:09:11.069 "data_offset": 2048, 00:09:11.069 "data_size": 63488 00:09:11.069 }, 00:09:11.069 { 00:09:11.069 "name": "pt2", 00:09:11.069 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:11.069 "is_configured": true, 00:09:11.069 "data_offset": 2048, 00:09:11.069 "data_size": 63488 00:09:11.069 }, 00:09:11.069 { 00:09:11.069 "name": "pt3", 00:09:11.069 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:11.069 "is_configured": true, 00:09:11.069 "data_offset": 2048, 00:09:11.069 
"data_size": 63488 00:09:11.069 } 00:09:11.069 ] 00:09:11.069 } 00:09:11.069 } 00:09:11.069 }' 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:11.328 pt2 00:09:11.328 pt3' 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.328 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.589 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:11.589 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:11.589 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.589 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.589 [2024-11-18 03:09:14.909398] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.589 03:09:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.589 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5463acda-dddb-498e-8495-1f0e925da7ca 00:09:11.589 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5463acda-dddb-498e-8495-1f0e925da7ca ']' 00:09:11.589 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:11.589 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.589 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.589 [2024-11-18 03:09:14.956983] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.589 [2024-11-18 03:09:14.957013] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.589 [2024-11-18 03:09:14.957106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.589 [2024-11-18 03:09:14.957173] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.589 [2024-11-18 03:09:14.957189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:11.589 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.589 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.589 03:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:11.589 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.589 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.589 03:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 
00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.589 [2024-11-18 03:09:15.104743] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:11.589 [2024-11-18 03:09:15.106841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:11.589 
[2024-11-18 03:09:15.106895] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:11.589 [2024-11-18 03:09:15.106950] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:11.589 [2024-11-18 03:09:15.107014] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:11.589 [2024-11-18 03:09:15.107035] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:11.589 [2024-11-18 03:09:15.107050] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.589 [2024-11-18 03:09:15.107071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:09:11.589 request: 00:09:11.589 { 00:09:11.589 "name": "raid_bdev1", 00:09:11.589 "raid_level": "concat", 00:09:11.589 "base_bdevs": [ 00:09:11.589 "malloc1", 00:09:11.589 "malloc2", 00:09:11.589 "malloc3" 00:09:11.589 ], 00:09:11.589 "strip_size_kb": 64, 00:09:11.589 "superblock": false, 00:09:11.589 "method": "bdev_raid_create", 00:09:11.589 "req_id": 1 00:09:11.589 } 00:09:11.589 Got JSON-RPC error response 00:09:11.589 response: 00:09:11.589 { 00:09:11.589 "code": -17, 00:09:11.589 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:11.589 } 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:11.589 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:11.589 03:09:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.590 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.590 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:11.590 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.590 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.850 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:11.850 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:11.850 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:11.850 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.850 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.850 [2024-11-18 03:09:15.172598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:11.850 [2024-11-18 03:09:15.172728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.850 [2024-11-18 03:09:15.172768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:11.850 [2024-11-18 03:09:15.172801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.850 [2024-11-18 03:09:15.175197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.850 [2024-11-18 03:09:15.175289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:11.850 [2024-11-18 03:09:15.175401] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:11.850 [2024-11-18 03:09:15.175494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:09:11.850 pt1 00:09:11.850 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.850 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:11.850 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.851 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.851 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.851 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.851 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.851 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.851 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.851 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.851 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.851 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.851 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.851 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.851 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.851 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.851 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.851 "name": "raid_bdev1", 00:09:11.851 "uuid": 
"5463acda-dddb-498e-8495-1f0e925da7ca", 00:09:11.851 "strip_size_kb": 64, 00:09:11.851 "state": "configuring", 00:09:11.851 "raid_level": "concat", 00:09:11.851 "superblock": true, 00:09:11.851 "num_base_bdevs": 3, 00:09:11.851 "num_base_bdevs_discovered": 1, 00:09:11.851 "num_base_bdevs_operational": 3, 00:09:11.851 "base_bdevs_list": [ 00:09:11.851 { 00:09:11.851 "name": "pt1", 00:09:11.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:11.851 "is_configured": true, 00:09:11.851 "data_offset": 2048, 00:09:11.851 "data_size": 63488 00:09:11.851 }, 00:09:11.851 { 00:09:11.851 "name": null, 00:09:11.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:11.851 "is_configured": false, 00:09:11.851 "data_offset": 2048, 00:09:11.851 "data_size": 63488 00:09:11.851 }, 00:09:11.851 { 00:09:11.851 "name": null, 00:09:11.851 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:11.851 "is_configured": false, 00:09:11.851 "data_offset": 2048, 00:09:11.851 "data_size": 63488 00:09:11.851 } 00:09:11.851 ] 00:09:11.851 }' 00:09:11.851 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.851 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.112 [2024-11-18 03:09:15.611876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:12.112 [2024-11-18 03:09:15.612054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.112 [2024-11-18 03:09:15.612084] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:12.112 [2024-11-18 03:09:15.612099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.112 [2024-11-18 03:09:15.612527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.112 [2024-11-18 03:09:15.612550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:12.112 [2024-11-18 03:09:15.612627] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:12.112 [2024-11-18 03:09:15.612652] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:12.112 pt2 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.112 [2024-11-18 03:09:15.619872] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.112 "name": "raid_bdev1", 00:09:12.112 "uuid": "5463acda-dddb-498e-8495-1f0e925da7ca", 00:09:12.112 "strip_size_kb": 64, 00:09:12.112 "state": "configuring", 00:09:12.112 "raid_level": "concat", 00:09:12.112 "superblock": true, 00:09:12.112 "num_base_bdevs": 3, 00:09:12.112 "num_base_bdevs_discovered": 1, 00:09:12.112 "num_base_bdevs_operational": 3, 00:09:12.112 "base_bdevs_list": [ 00:09:12.112 { 00:09:12.112 "name": "pt1", 00:09:12.112 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:12.112 "is_configured": true, 00:09:12.112 "data_offset": 2048, 00:09:12.112 "data_size": 63488 00:09:12.112 }, 00:09:12.112 { 00:09:12.112 "name": null, 00:09:12.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.112 "is_configured": false, 00:09:12.112 "data_offset": 0, 00:09:12.112 "data_size": 63488 00:09:12.112 }, 00:09:12.112 { 00:09:12.112 "name": null, 00:09:12.112 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:12.112 "is_configured": false, 00:09:12.112 "data_offset": 2048, 00:09:12.112 "data_size": 63488 00:09:12.112 } 00:09:12.112 ] 00:09:12.112 }' 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.112 03:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.682 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:12.682 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:12.682 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:12.682 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.682 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.682 [2024-11-18 03:09:16.039216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:12.682 [2024-11-18 03:09:16.039351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.682 [2024-11-18 03:09:16.039392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:12.682 [2024-11-18 03:09:16.039419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.682 [2024-11-18 03:09:16.039884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.682 [2024-11-18 03:09:16.039947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:12.682 [2024-11-18 03:09:16.040081] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:12.682 [2024-11-18 03:09:16.040134] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:12.682 pt2 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.683 [2024-11-18 03:09:16.051122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:12.683 [2024-11-18 03:09:16.051179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.683 [2024-11-18 03:09:16.051200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:12.683 [2024-11-18 03:09:16.051208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.683 [2024-11-18 03:09:16.051556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.683 [2024-11-18 03:09:16.051572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:12.683 [2024-11-18 03:09:16.051633] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:12.683 [2024-11-18 03:09:16.051651] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:12.683 [2024-11-18 03:09:16.051743] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:12.683 [2024-11-18 03:09:16.051751] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:12.683 [2024-11-18 03:09:16.051995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:12.683 [2024-11-18 
03:09:16.052120] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:12.683 [2024-11-18 03:09:16.052133] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:12.683 [2024-11-18 03:09:16.052236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.683 pt3 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.683 "name": "raid_bdev1", 00:09:12.683 "uuid": "5463acda-dddb-498e-8495-1f0e925da7ca", 00:09:12.683 "strip_size_kb": 64, 00:09:12.683 "state": "online", 00:09:12.683 "raid_level": "concat", 00:09:12.683 "superblock": true, 00:09:12.683 "num_base_bdevs": 3, 00:09:12.683 "num_base_bdevs_discovered": 3, 00:09:12.683 "num_base_bdevs_operational": 3, 00:09:12.683 "base_bdevs_list": [ 00:09:12.683 { 00:09:12.683 "name": "pt1", 00:09:12.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:12.683 "is_configured": true, 00:09:12.683 "data_offset": 2048, 00:09:12.683 "data_size": 63488 00:09:12.683 }, 00:09:12.683 { 00:09:12.683 "name": "pt2", 00:09:12.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.683 "is_configured": true, 00:09:12.683 "data_offset": 2048, 00:09:12.683 "data_size": 63488 00:09:12.683 }, 00:09:12.683 { 00:09:12.683 "name": "pt3", 00:09:12.683 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:12.683 "is_configured": true, 00:09:12.683 "data_offset": 2048, 00:09:12.683 "data_size": 63488 00:09:12.683 } 00:09:12.683 ] 00:09:12.683 }' 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.683 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.942 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:12.942 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:12.942 03:09:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:12.942 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:12.942 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:12.942 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:12.942 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:12.942 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.942 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.942 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:12.942 [2024-11-18 03:09:16.506721] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.201 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.201 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:13.201 "name": "raid_bdev1", 00:09:13.201 "aliases": [ 00:09:13.201 "5463acda-dddb-498e-8495-1f0e925da7ca" 00:09:13.201 ], 00:09:13.201 "product_name": "Raid Volume", 00:09:13.201 "block_size": 512, 00:09:13.201 "num_blocks": 190464, 00:09:13.201 "uuid": "5463acda-dddb-498e-8495-1f0e925da7ca", 00:09:13.201 "assigned_rate_limits": { 00:09:13.201 "rw_ios_per_sec": 0, 00:09:13.201 "rw_mbytes_per_sec": 0, 00:09:13.201 "r_mbytes_per_sec": 0, 00:09:13.201 "w_mbytes_per_sec": 0 00:09:13.201 }, 00:09:13.201 "claimed": false, 00:09:13.201 "zoned": false, 00:09:13.201 "supported_io_types": { 00:09:13.201 "read": true, 00:09:13.201 "write": true, 00:09:13.201 "unmap": true, 00:09:13.201 "flush": true, 00:09:13.201 "reset": true, 00:09:13.201 "nvme_admin": false, 00:09:13.201 "nvme_io": false, 00:09:13.201 "nvme_io_md": false, 00:09:13.201 
"write_zeroes": true, 00:09:13.201 "zcopy": false, 00:09:13.201 "get_zone_info": false, 00:09:13.201 "zone_management": false, 00:09:13.201 "zone_append": false, 00:09:13.201 "compare": false, 00:09:13.201 "compare_and_write": false, 00:09:13.201 "abort": false, 00:09:13.201 "seek_hole": false, 00:09:13.201 "seek_data": false, 00:09:13.201 "copy": false, 00:09:13.201 "nvme_iov_md": false 00:09:13.201 }, 00:09:13.201 "memory_domains": [ 00:09:13.201 { 00:09:13.201 "dma_device_id": "system", 00:09:13.201 "dma_device_type": 1 00:09:13.201 }, 00:09:13.201 { 00:09:13.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.201 "dma_device_type": 2 00:09:13.201 }, 00:09:13.201 { 00:09:13.201 "dma_device_id": "system", 00:09:13.201 "dma_device_type": 1 00:09:13.201 }, 00:09:13.201 { 00:09:13.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.201 "dma_device_type": 2 00:09:13.201 }, 00:09:13.201 { 00:09:13.201 "dma_device_id": "system", 00:09:13.201 "dma_device_type": 1 00:09:13.201 }, 00:09:13.201 { 00:09:13.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.201 "dma_device_type": 2 00:09:13.201 } 00:09:13.201 ], 00:09:13.202 "driver_specific": { 00:09:13.202 "raid": { 00:09:13.202 "uuid": "5463acda-dddb-498e-8495-1f0e925da7ca", 00:09:13.202 "strip_size_kb": 64, 00:09:13.202 "state": "online", 00:09:13.202 "raid_level": "concat", 00:09:13.202 "superblock": true, 00:09:13.202 "num_base_bdevs": 3, 00:09:13.202 "num_base_bdevs_discovered": 3, 00:09:13.202 "num_base_bdevs_operational": 3, 00:09:13.202 "base_bdevs_list": [ 00:09:13.202 { 00:09:13.202 "name": "pt1", 00:09:13.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:13.202 "is_configured": true, 00:09:13.202 "data_offset": 2048, 00:09:13.202 "data_size": 63488 00:09:13.202 }, 00:09:13.202 { 00:09:13.202 "name": "pt2", 00:09:13.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:13.202 "is_configured": true, 00:09:13.202 "data_offset": 2048, 00:09:13.202 "data_size": 63488 00:09:13.202 }, 00:09:13.202 
{ 00:09:13.202 "name": "pt3", 00:09:13.202 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:13.202 "is_configured": true, 00:09:13.202 "data_offset": 2048, 00:09:13.202 "data_size": 63488 00:09:13.202 } 00:09:13.202 ] 00:09:13.202 } 00:09:13.202 } 00:09:13.202 }' 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:13.202 pt2 00:09:13.202 pt3' 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:13.202 03:09:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.202 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.480 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.480 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.480 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.480 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:13.480 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:13.480 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.480 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.480 
[2024-11-18 03:09:16.814307] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.480 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.480 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5463acda-dddb-498e-8495-1f0e925da7ca '!=' 5463acda-dddb-498e-8495-1f0e925da7ca ']' 00:09:13.480 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:13.481 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:13.481 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:13.481 03:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78102 00:09:13.481 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 78102 ']' 00:09:13.481 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 78102 00:09:13.481 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:13.481 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:13.481 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78102 00:09:13.481 killing process with pid 78102 00:09:13.481 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:13.481 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:13.481 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78102' 00:09:13.481 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 78102 00:09:13.481 [2024-11-18 03:09:16.904014] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.481 [2024-11-18 03:09:16.904121] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.481 [2024-11-18 03:09:16.904191] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.481 [2024-11-18 03:09:16.904202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:13.481 03:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 78102 00:09:13.481 [2024-11-18 03:09:16.939766] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:13.759 03:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:13.759 00:09:13.759 real 0m4.085s 00:09:13.759 user 0m6.437s 00:09:13.759 sys 0m0.870s 00:09:13.759 ************************************ 00:09:13.759 END TEST raid_superblock_test 00:09:13.759 ************************************ 00:09:13.759 03:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.759 03:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.759 03:09:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:13.759 03:09:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:13.759 03:09:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.759 03:09:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.759 ************************************ 00:09:13.759 START TEST raid_read_error_test 00:09:13.759 ************************************ 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:13.759 03:09:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kotKoqUUux 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78344 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78344 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 78344 ']' 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.759 03:09:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.025 [2024-11-18 03:09:17.353311] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:14.025 [2024-11-18 03:09:17.353448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78344 ] 00:09:14.025 [2024-11-18 03:09:17.515192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.025 [2024-11-18 03:09:17.566374] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.284 [2024-11-18 03:09:17.610225] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.284 [2024-11-18 03:09:17.610267] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.852 BaseBdev1_malloc 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.852 true 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.852 [2024-11-18 03:09:18.229411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:14.852 [2024-11-18 03:09:18.229477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.852 [2024-11-18 03:09:18.229501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:14.852 [2024-11-18 03:09:18.229510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.852 [2024-11-18 03:09:18.231886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.852 [2024-11-18 03:09:18.231934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:14.852 BaseBdev1 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.852 BaseBdev2_malloc 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.852 true 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.852 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.853 [2024-11-18 03:09:18.281137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:14.853 [2024-11-18 03:09:18.281193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.853 [2024-11-18 03:09:18.281229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:14.853 [2024-11-18 03:09:18.281237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.853 [2024-11-18 03:09:18.283452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.853 [2024-11-18 03:09:18.283494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:14.853 BaseBdev2 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.853 BaseBdev3_malloc 00:09:14.853 03:09:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.853 true 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.853 [2024-11-18 03:09:18.322084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:14.853 [2024-11-18 03:09:18.322140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.853 [2024-11-18 03:09:18.322164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:14.853 [2024-11-18 03:09:18.322173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.853 [2024-11-18 03:09:18.324477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.853 [2024-11-18 03:09:18.324566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:14.853 BaseBdev3 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.853 [2024-11-18 03:09:18.334153] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.853 [2024-11-18 03:09:18.336217] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.853 [2024-11-18 03:09:18.336311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.853 [2024-11-18 03:09:18.336511] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:14.853 [2024-11-18 03:09:18.336530] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:14.853 [2024-11-18 03:09:18.336810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:14.853 [2024-11-18 03:09:18.336934] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:14.853 [2024-11-18 03:09:18.336943] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:14.853 [2024-11-18 03:09:18.337098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.853 03:09:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.853 "name": "raid_bdev1", 00:09:14.853 "uuid": "0332c7c9-b3f6-4181-8885-3424d553610a", 00:09:14.853 "strip_size_kb": 64, 00:09:14.853 "state": "online", 00:09:14.853 "raid_level": "concat", 00:09:14.853 "superblock": true, 00:09:14.853 "num_base_bdevs": 3, 00:09:14.853 "num_base_bdevs_discovered": 3, 00:09:14.853 "num_base_bdevs_operational": 3, 00:09:14.853 "base_bdevs_list": [ 00:09:14.853 { 00:09:14.853 "name": "BaseBdev1", 00:09:14.853 "uuid": "5bf4491c-f2e8-5304-a1ef-5eb4c8a425cf", 00:09:14.853 "is_configured": true, 00:09:14.853 "data_offset": 2048, 00:09:14.853 "data_size": 63488 00:09:14.853 }, 00:09:14.853 { 00:09:14.853 "name": "BaseBdev2", 00:09:14.853 "uuid": "1b35d753-294e-5537-8b23-6b43a4b1c178", 00:09:14.853 "is_configured": true, 00:09:14.853 "data_offset": 2048, 00:09:14.853 "data_size": 63488 
00:09:14.853 }, 00:09:14.853 { 00:09:14.853 "name": "BaseBdev3", 00:09:14.853 "uuid": "863ea1e7-5070-5356-80a2-a8da50123f33", 00:09:14.853 "is_configured": true, 00:09:14.853 "data_offset": 2048, 00:09:14.853 "data_size": 63488 00:09:14.853 } 00:09:14.853 ] 00:09:14.853 }' 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.853 03:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.421 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:15.421 03:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:15.421 [2024-11-18 03:09:18.865599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.359 "name": "raid_bdev1", 00:09:16.359 "uuid": "0332c7c9-b3f6-4181-8885-3424d553610a", 00:09:16.359 "strip_size_kb": 64, 00:09:16.359 "state": "online", 00:09:16.359 "raid_level": "concat", 00:09:16.359 "superblock": true, 00:09:16.359 "num_base_bdevs": 3, 00:09:16.359 "num_base_bdevs_discovered": 3, 00:09:16.359 "num_base_bdevs_operational": 3, 00:09:16.359 "base_bdevs_list": [ 00:09:16.359 { 00:09:16.359 "name": "BaseBdev1", 00:09:16.359 "uuid": "5bf4491c-f2e8-5304-a1ef-5eb4c8a425cf", 00:09:16.359 "is_configured": true, 00:09:16.359 "data_offset": 2048, 00:09:16.359 "data_size": 63488 
00:09:16.359 }, 00:09:16.359 { 00:09:16.359 "name": "BaseBdev2", 00:09:16.359 "uuid": "1b35d753-294e-5537-8b23-6b43a4b1c178", 00:09:16.359 "is_configured": true, 00:09:16.359 "data_offset": 2048, 00:09:16.359 "data_size": 63488 00:09:16.359 }, 00:09:16.359 { 00:09:16.359 "name": "BaseBdev3", 00:09:16.359 "uuid": "863ea1e7-5070-5356-80a2-a8da50123f33", 00:09:16.359 "is_configured": true, 00:09:16.359 "data_offset": 2048, 00:09:16.359 "data_size": 63488 00:09:16.359 } 00:09:16.359 ] 00:09:16.359 }' 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.359 03:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.617 03:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:16.617 03:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.617 03:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.617 [2024-11-18 03:09:20.181473] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:16.617 [2024-11-18 03:09:20.181587] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.617 [2024-11-18 03:09:20.184582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.617 [2024-11-18 03:09:20.184691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.617 [2024-11-18 03:09:20.184750] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.617 [2024-11-18 03:09:20.184813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:16.617 { 00:09:16.618 "results": [ 00:09:16.618 { 00:09:16.618 "job": "raid_bdev1", 00:09:16.618 "core_mask": "0x1", 00:09:16.618 "workload": "randrw", 00:09:16.618 "percentage": 50, 
00:09:16.618 "status": "finished", 00:09:16.618 "queue_depth": 1, 00:09:16.618 "io_size": 131072, 00:09:16.618 "runtime": 1.316554, 00:09:16.618 "iops": 15793.503342817688, 00:09:16.618 "mibps": 1974.187917852211, 00:09:16.618 "io_failed": 1, 00:09:16.618 "io_timeout": 0, 00:09:16.618 "avg_latency_us": 87.72031233396599, 00:09:16.618 "min_latency_us": 26.270742358078603, 00:09:16.618 "max_latency_us": 1488.1537117903931 00:09:16.618 } 00:09:16.618 ], 00:09:16.618 "core_count": 1 00:09:16.618 } 00:09:16.618 03:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.618 03:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78344 00:09:16.618 03:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 78344 ']' 00:09:16.618 03:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 78344 00:09:16.618 03:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:16.876 03:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:16.876 03:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78344 00:09:16.876 03:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:16.876 03:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:16.876 03:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78344' 00:09:16.876 killing process with pid 78344 00:09:16.876 03:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 78344 00:09:16.876 [2024-11-18 03:09:20.227304] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.876 03:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 78344 00:09:16.876 [2024-11-18 
03:09:20.254350] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:17.135 03:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kotKoqUUux 00:09:17.135 03:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:17.135 03:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:17.135 03:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:09:17.135 03:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:17.135 03:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:17.135 03:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:17.135 03:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:09:17.135 00:09:17.135 real 0m3.248s 00:09:17.135 user 0m4.064s 00:09:17.135 sys 0m0.524s 00:09:17.135 ************************************ 00:09:17.135 03:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.135 03:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.135 END TEST raid_read_error_test 00:09:17.135 ************************************ 00:09:17.135 03:09:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:17.135 03:09:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:17.135 03:09:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.135 03:09:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.135 ************************************ 00:09:17.135 START TEST raid_write_error_test 00:09:17.135 ************************************ 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:09:17.135 03:09:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:17.135 03:09:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KBJiDY8dge 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78473 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78473 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78473 ']' 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:17.135 03:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.135 [2024-11-18 03:09:20.671233] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:17.135 [2024-11-18 03:09:20.671461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78473 ] 00:09:17.395 [2024-11-18 03:09:20.833258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.395 [2024-11-18 03:09:20.884279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.395 [2024-11-18 03:09:20.926842] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.395 [2024-11-18 03:09:20.926880] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.964 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:17.964 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:17.964 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.964 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:17.964 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.964 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.964 BaseBdev1_malloc 00:09:17.964 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.964 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:17.964 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.964 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.224 true 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.224 [2024-11-18 03:09:21.545517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:18.224 [2024-11-18 03:09:21.545591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.224 [2024-11-18 03:09:21.545617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:18.224 [2024-11-18 03:09:21.545626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.224 [2024-11-18 03:09:21.548066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.224 [2024-11-18 03:09:21.548112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:18.224 BaseBdev1 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:18.224 BaseBdev2_malloc 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.224 true 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.224 [2024-11-18 03:09:21.595546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:18.224 [2024-11-18 03:09:21.595663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.224 [2024-11-18 03:09:21.595710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:18.224 [2024-11-18 03:09:21.595720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.224 [2024-11-18 03:09:21.598068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.224 [2024-11-18 03:09:21.598107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:18.224 BaseBdev2 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.224 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.225 03:09:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.225 BaseBdev3_malloc 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.225 true 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.225 [2024-11-18 03:09:21.636480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:18.225 [2024-11-18 03:09:21.636541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.225 [2024-11-18 03:09:21.636564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:18.225 [2024-11-18 03:09:21.636572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.225 [2024-11-18 03:09:21.638877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.225 [2024-11-18 03:09:21.639018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:18.225 BaseBdev3 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.225 [2024-11-18 03:09:21.648528] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.225 [2024-11-18 03:09:21.650588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:18.225 [2024-11-18 03:09:21.650680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:18.225 [2024-11-18 03:09:21.650871] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:18.225 [2024-11-18 03:09:21.650886] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:18.225 [2024-11-18 03:09:21.651188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:18.225 [2024-11-18 03:09:21.651328] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:18.225 [2024-11-18 03:09:21.651361] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:18.225 [2024-11-18 03:09:21.651522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.225 "name": "raid_bdev1", 00:09:18.225 "uuid": "83153c3a-a1e2-482a-a0f8-ed2dafe06e0e", 00:09:18.225 "strip_size_kb": 64, 00:09:18.225 "state": "online", 00:09:18.225 "raid_level": "concat", 00:09:18.225 "superblock": true, 00:09:18.225 "num_base_bdevs": 3, 00:09:18.225 "num_base_bdevs_discovered": 3, 00:09:18.225 "num_base_bdevs_operational": 3, 00:09:18.225 "base_bdevs_list": [ 00:09:18.225 { 00:09:18.225 
"name": "BaseBdev1", 00:09:18.225 "uuid": "adba459c-131d-5edd-beda-376c5d83e11c", 00:09:18.225 "is_configured": true, 00:09:18.225 "data_offset": 2048, 00:09:18.225 "data_size": 63488 00:09:18.225 }, 00:09:18.225 { 00:09:18.225 "name": "BaseBdev2", 00:09:18.225 "uuid": "30a44a8b-be4f-513b-9c9f-a34acd45300c", 00:09:18.225 "is_configured": true, 00:09:18.225 "data_offset": 2048, 00:09:18.225 "data_size": 63488 00:09:18.225 }, 00:09:18.225 { 00:09:18.225 "name": "BaseBdev3", 00:09:18.225 "uuid": "1ab86458-170a-5f9a-9715-0552a89c52c1", 00:09:18.225 "is_configured": true, 00:09:18.225 "data_offset": 2048, 00:09:18.225 "data_size": 63488 00:09:18.225 } 00:09:18.225 ] 00:09:18.225 }' 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.225 03:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.484 03:09:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:18.484 03:09:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:18.742 [2024-11-18 03:09:22.148093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.682 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.682 "name": "raid_bdev1", 00:09:19.682 "uuid": "83153c3a-a1e2-482a-a0f8-ed2dafe06e0e", 00:09:19.682 "strip_size_kb": 64, 00:09:19.682 "state": "online", 
00:09:19.682 "raid_level": "concat", 00:09:19.682 "superblock": true, 00:09:19.682 "num_base_bdevs": 3, 00:09:19.682 "num_base_bdevs_discovered": 3, 00:09:19.682 "num_base_bdevs_operational": 3, 00:09:19.682 "base_bdevs_list": [ 00:09:19.682 { 00:09:19.682 "name": "BaseBdev1", 00:09:19.682 "uuid": "adba459c-131d-5edd-beda-376c5d83e11c", 00:09:19.682 "is_configured": true, 00:09:19.682 "data_offset": 2048, 00:09:19.682 "data_size": 63488 00:09:19.682 }, 00:09:19.682 { 00:09:19.682 "name": "BaseBdev2", 00:09:19.682 "uuid": "30a44a8b-be4f-513b-9c9f-a34acd45300c", 00:09:19.682 "is_configured": true, 00:09:19.682 "data_offset": 2048, 00:09:19.682 "data_size": 63488 00:09:19.682 }, 00:09:19.682 { 00:09:19.683 "name": "BaseBdev3", 00:09:19.683 "uuid": "1ab86458-170a-5f9a-9715-0552a89c52c1", 00:09:19.683 "is_configured": true, 00:09:19.683 "data_offset": 2048, 00:09:19.683 "data_size": 63488 00:09:19.683 } 00:09:19.683 ] 00:09:19.683 }' 00:09:19.683 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.683 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.253 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:20.253 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.253 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.253 [2024-11-18 03:09:23.544726] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.253 [2024-11-18 03:09:23.544832] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.253 [2024-11-18 03:09:23.547401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.253 [2024-11-18 03:09:23.547497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.253 [2024-11-18 03:09:23.547580] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.253 [2024-11-18 03:09:23.547632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:20.253 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.253 { 00:09:20.253 "results": [ 00:09:20.253 { 00:09:20.253 "job": "raid_bdev1", 00:09:20.253 "core_mask": "0x1", 00:09:20.253 "workload": "randrw", 00:09:20.253 "percentage": 50, 00:09:20.253 "status": "finished", 00:09:20.253 "queue_depth": 1, 00:09:20.253 "io_size": 131072, 00:09:20.253 "runtime": 1.397268, 00:09:20.253 "iops": 15714.952321244027, 00:09:20.253 "mibps": 1964.3690401555034, 00:09:20.253 "io_failed": 1, 00:09:20.253 "io_timeout": 0, 00:09:20.253 "avg_latency_us": 88.19604268455046, 00:09:20.253 "min_latency_us": 26.382532751091702, 00:09:20.253 "max_latency_us": 1831.5737991266376 00:09:20.253 } 00:09:20.253 ], 00:09:20.253 "core_count": 1 00:09:20.253 } 00:09:20.253 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78473 00:09:20.253 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78473 ']' 00:09:20.253 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78473 00:09:20.253 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:20.253 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.253 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78473 00:09:20.253 killing process with pid 78473 00:09:20.253 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:20.253 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:20.253 
03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78473' 00:09:20.253 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78473 00:09:20.253 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78473 00:09:20.253 [2024-11-18 03:09:23.593426] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.253 [2024-11-18 03:09:23.620142] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:20.513 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KBJiDY8dge 00:09:20.513 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:20.513 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:20.513 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:20.513 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:20.513 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.513 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.513 03:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:20.513 00:09:20.513 real 0m3.294s 00:09:20.513 user 0m4.181s 00:09:20.513 sys 0m0.521s 00:09:20.513 ************************************ 00:09:20.513 END TEST raid_write_error_test 00:09:20.513 ************************************ 00:09:20.513 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.513 03:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.513 03:09:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:20.513 03:09:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:20.513 03:09:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:20.513 03:09:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.513 03:09:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:20.513 ************************************ 00:09:20.513 START TEST raid_state_function_test 00:09:20.513 ************************************ 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78606 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78606' 00:09:20.513 Process raid pid: 78606 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78606 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78606 ']' 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.513 03:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.513 [2024-11-18 03:09:24.032590] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:20.513 [2024-11-18 03:09:24.032723] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.773 [2024-11-18 03:09:24.191560] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.773 [2024-11-18 03:09:24.242781] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.773 [2024-11-18 03:09:24.286091] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.773 [2024-11-18 03:09:24.286131] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.342 03:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:21.342 03:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:21.342 03:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.342 03:09:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.342 03:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.342 [2024-11-18 03:09:24.888525] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.342 [2024-11-18 03:09:24.888585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.342 [2024-11-18 03:09:24.888597] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:21.342 [2024-11-18 03:09:24.888607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:21.342 [2024-11-18 03:09:24.888614] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:21.342 [2024-11-18 03:09:24.888627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:21.343 03:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.343 03:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:21.343 03:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.343 03:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.343 03:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.343 03:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.343 03:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.343 03:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.343 03:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.343 
03:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.343 03:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.343 03:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.343 03:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.343 03:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.343 03:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.602 03:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.602 03:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.602 "name": "Existed_Raid", 00:09:21.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.603 "strip_size_kb": 0, 00:09:21.603 "state": "configuring", 00:09:21.603 "raid_level": "raid1", 00:09:21.603 "superblock": false, 00:09:21.603 "num_base_bdevs": 3, 00:09:21.603 "num_base_bdevs_discovered": 0, 00:09:21.603 "num_base_bdevs_operational": 3, 00:09:21.603 "base_bdevs_list": [ 00:09:21.603 { 00:09:21.603 "name": "BaseBdev1", 00:09:21.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.603 "is_configured": false, 00:09:21.603 "data_offset": 0, 00:09:21.603 "data_size": 0 00:09:21.603 }, 00:09:21.603 { 00:09:21.603 "name": "BaseBdev2", 00:09:21.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.603 "is_configured": false, 00:09:21.603 "data_offset": 0, 00:09:21.603 "data_size": 0 00:09:21.603 }, 00:09:21.603 { 00:09:21.603 "name": "BaseBdev3", 00:09:21.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.603 "is_configured": false, 00:09:21.603 "data_offset": 0, 00:09:21.603 "data_size": 0 00:09:21.603 } 00:09:21.603 ] 00:09:21.603 }' 00:09:21.603 03:09:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.603 03:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.865 [2024-11-18 03:09:25.319672] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:21.865 [2024-11-18 03:09:25.319790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.865 [2024-11-18 03:09:25.331681] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.865 [2024-11-18 03:09:25.331769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.865 [2024-11-18 03:09:25.331800] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:21.865 [2024-11-18 03:09:25.331824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:21.865 [2024-11-18 03:09:25.331851] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:21.865 [2024-11-18 03:09:25.331874] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.865 [2024-11-18 03:09:25.352833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:21.865 BaseBdev1 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.865 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.865 [ 00:09:21.865 { 00:09:21.865 "name": "BaseBdev1", 00:09:21.865 "aliases": [ 00:09:21.865 "86d27502-dd78-417b-aa3b-217f08c1beb4" 00:09:21.865 ], 00:09:21.865 "product_name": "Malloc disk", 00:09:21.865 "block_size": 512, 00:09:21.865 "num_blocks": 65536, 00:09:21.865 "uuid": "86d27502-dd78-417b-aa3b-217f08c1beb4", 00:09:21.865 "assigned_rate_limits": { 00:09:21.865 "rw_ios_per_sec": 0, 00:09:21.865 "rw_mbytes_per_sec": 0, 00:09:21.865 "r_mbytes_per_sec": 0, 00:09:21.865 "w_mbytes_per_sec": 0 00:09:21.865 }, 00:09:21.865 "claimed": true, 00:09:21.865 "claim_type": "exclusive_write", 00:09:21.865 "zoned": false, 00:09:21.865 "supported_io_types": { 00:09:21.865 "read": true, 00:09:21.866 "write": true, 00:09:21.866 "unmap": true, 00:09:21.866 "flush": true, 00:09:21.866 "reset": true, 00:09:21.866 "nvme_admin": false, 00:09:21.866 "nvme_io": false, 00:09:21.866 "nvme_io_md": false, 00:09:21.866 "write_zeroes": true, 00:09:21.866 "zcopy": true, 00:09:21.866 "get_zone_info": false, 00:09:21.866 "zone_management": false, 00:09:21.866 "zone_append": false, 00:09:21.866 "compare": false, 00:09:21.866 "compare_and_write": false, 00:09:21.866 "abort": true, 00:09:21.866 "seek_hole": false, 00:09:21.866 "seek_data": false, 00:09:21.866 "copy": true, 00:09:21.866 "nvme_iov_md": false 00:09:21.866 }, 00:09:21.866 "memory_domains": [ 00:09:21.866 { 00:09:21.866 "dma_device_id": "system", 00:09:21.866 "dma_device_type": 1 00:09:21.866 }, 00:09:21.866 { 00:09:21.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.866 "dma_device_type": 2 00:09:21.866 } 00:09:21.866 ], 00:09:21.866 "driver_specific": {} 00:09:21.866 } 00:09:21.866 ] 00:09:21.866 03:09:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:21.866 "name": "Existed_Raid", 00:09:21.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.866 "strip_size_kb": 0, 00:09:21.866 "state": "configuring", 00:09:21.866 "raid_level": "raid1", 00:09:21.866 "superblock": false, 00:09:21.866 "num_base_bdevs": 3, 00:09:21.866 "num_base_bdevs_discovered": 1, 00:09:21.866 "num_base_bdevs_operational": 3, 00:09:21.866 "base_bdevs_list": [ 00:09:21.866 { 00:09:21.866 "name": "BaseBdev1", 00:09:21.866 "uuid": "86d27502-dd78-417b-aa3b-217f08c1beb4", 00:09:21.866 "is_configured": true, 00:09:21.866 "data_offset": 0, 00:09:21.866 "data_size": 65536 00:09:21.866 }, 00:09:21.866 { 00:09:21.866 "name": "BaseBdev2", 00:09:21.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.866 "is_configured": false, 00:09:21.866 "data_offset": 0, 00:09:21.866 "data_size": 0 00:09:21.866 }, 00:09:21.866 { 00:09:21.866 "name": "BaseBdev3", 00:09:21.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.866 "is_configured": false, 00:09:21.866 "data_offset": 0, 00:09:21.866 "data_size": 0 00:09:21.866 } 00:09:21.866 ] 00:09:21.866 }' 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.866 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.436 [2024-11-18 03:09:25.832115] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.436 [2024-11-18 03:09:25.832237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.436 [2024-11-18 03:09:25.840127] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.436 [2024-11-18 03:09:25.842309] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.436 [2024-11-18 03:09:25.842393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.436 [2024-11-18 03:09:25.842445] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:22.436 [2024-11-18 03:09:25.842486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.436 "name": "Existed_Raid", 00:09:22.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.436 "strip_size_kb": 0, 00:09:22.436 "state": "configuring", 00:09:22.436 "raid_level": "raid1", 00:09:22.436 "superblock": false, 00:09:22.436 "num_base_bdevs": 3, 00:09:22.436 "num_base_bdevs_discovered": 1, 00:09:22.436 "num_base_bdevs_operational": 3, 00:09:22.436 "base_bdevs_list": [ 00:09:22.436 { 00:09:22.436 "name": "BaseBdev1", 00:09:22.436 "uuid": "86d27502-dd78-417b-aa3b-217f08c1beb4", 00:09:22.436 "is_configured": true, 00:09:22.436 "data_offset": 0, 00:09:22.436 "data_size": 65536 00:09:22.436 }, 00:09:22.436 { 00:09:22.436 "name": "BaseBdev2", 00:09:22.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.436 
"is_configured": false, 00:09:22.436 "data_offset": 0, 00:09:22.436 "data_size": 0 00:09:22.436 }, 00:09:22.436 { 00:09:22.436 "name": "BaseBdev3", 00:09:22.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.436 "is_configured": false, 00:09:22.436 "data_offset": 0, 00:09:22.436 "data_size": 0 00:09:22.436 } 00:09:22.436 ] 00:09:22.436 }' 00:09:22.436 03:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.437 03:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.007 [2024-11-18 03:09:26.319688] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.007 BaseBdev2 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:23.007 03:09:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.007 [ 00:09:23.007 { 00:09:23.007 "name": "BaseBdev2", 00:09:23.007 "aliases": [ 00:09:23.007 "641a6884-04d7-4d25-8f4b-928afb7a0577" 00:09:23.007 ], 00:09:23.007 "product_name": "Malloc disk", 00:09:23.007 "block_size": 512, 00:09:23.007 "num_blocks": 65536, 00:09:23.007 "uuid": "641a6884-04d7-4d25-8f4b-928afb7a0577", 00:09:23.007 "assigned_rate_limits": { 00:09:23.007 "rw_ios_per_sec": 0, 00:09:23.007 "rw_mbytes_per_sec": 0, 00:09:23.007 "r_mbytes_per_sec": 0, 00:09:23.007 "w_mbytes_per_sec": 0 00:09:23.007 }, 00:09:23.007 "claimed": true, 00:09:23.007 "claim_type": "exclusive_write", 00:09:23.007 "zoned": false, 00:09:23.007 "supported_io_types": { 00:09:23.007 "read": true, 00:09:23.007 "write": true, 00:09:23.007 "unmap": true, 00:09:23.007 "flush": true, 00:09:23.007 "reset": true, 00:09:23.007 "nvme_admin": false, 00:09:23.007 "nvme_io": false, 00:09:23.007 "nvme_io_md": false, 00:09:23.007 "write_zeroes": true, 00:09:23.007 "zcopy": true, 00:09:23.007 "get_zone_info": false, 00:09:23.007 "zone_management": false, 00:09:23.007 "zone_append": false, 00:09:23.007 "compare": false, 00:09:23.007 "compare_and_write": false, 00:09:23.007 "abort": true, 00:09:23.007 "seek_hole": false, 00:09:23.007 "seek_data": false, 00:09:23.007 "copy": true, 00:09:23.007 "nvme_iov_md": false 00:09:23.007 }, 00:09:23.007 
"memory_domains": [ 00:09:23.007 { 00:09:23.007 "dma_device_id": "system", 00:09:23.007 "dma_device_type": 1 00:09:23.007 }, 00:09:23.007 { 00:09:23.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.007 "dma_device_type": 2 00:09:23.007 } 00:09:23.007 ], 00:09:23.007 "driver_specific": {} 00:09:23.007 } 00:09:23.007 ] 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.007 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.008 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.008 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.008 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.008 "name": "Existed_Raid", 00:09:23.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.008 "strip_size_kb": 0, 00:09:23.008 "state": "configuring", 00:09:23.008 "raid_level": "raid1", 00:09:23.008 "superblock": false, 00:09:23.008 "num_base_bdevs": 3, 00:09:23.008 "num_base_bdevs_discovered": 2, 00:09:23.008 "num_base_bdevs_operational": 3, 00:09:23.008 "base_bdevs_list": [ 00:09:23.008 { 00:09:23.008 "name": "BaseBdev1", 00:09:23.008 "uuid": "86d27502-dd78-417b-aa3b-217f08c1beb4", 00:09:23.008 "is_configured": true, 00:09:23.008 "data_offset": 0, 00:09:23.008 "data_size": 65536 00:09:23.008 }, 00:09:23.008 { 00:09:23.008 "name": "BaseBdev2", 00:09:23.008 "uuid": "641a6884-04d7-4d25-8f4b-928afb7a0577", 00:09:23.008 "is_configured": true, 00:09:23.008 "data_offset": 0, 00:09:23.008 "data_size": 65536 00:09:23.008 }, 00:09:23.008 { 00:09:23.008 "name": "BaseBdev3", 00:09:23.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.008 "is_configured": false, 00:09:23.008 "data_offset": 0, 00:09:23.008 "data_size": 0 00:09:23.008 } 00:09:23.008 ] 00:09:23.008 }' 00:09:23.008 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.008 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.268 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:23.268 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.268 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.268 [2024-11-18 03:09:26.834114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.268 [2024-11-18 03:09:26.834162] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:23.268 [2024-11-18 03:09:26.834172] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:23.268 [2024-11-18 03:09:26.834452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:23.268 [2024-11-18 03:09:26.834594] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:23.268 [2024-11-18 03:09:26.834604] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:23.268 [2024-11-18 03:09:26.834810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.268 BaseBdev3 00:09:23.268 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.268 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:23.268 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:23.268 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:23.268 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:23.268 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:23.268 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:23.268 03:09:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:23.268 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.268 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.529 [ 00:09:23.529 { 00:09:23.529 "name": "BaseBdev3", 00:09:23.529 "aliases": [ 00:09:23.529 "66c8243e-ba5f-44b8-bb2e-1879d7d455bc" 00:09:23.529 ], 00:09:23.529 "product_name": "Malloc disk", 00:09:23.529 "block_size": 512, 00:09:23.529 "num_blocks": 65536, 00:09:23.529 "uuid": "66c8243e-ba5f-44b8-bb2e-1879d7d455bc", 00:09:23.529 "assigned_rate_limits": { 00:09:23.529 "rw_ios_per_sec": 0, 00:09:23.529 "rw_mbytes_per_sec": 0, 00:09:23.529 "r_mbytes_per_sec": 0, 00:09:23.529 "w_mbytes_per_sec": 0 00:09:23.529 }, 00:09:23.529 "claimed": true, 00:09:23.529 "claim_type": "exclusive_write", 00:09:23.529 "zoned": false, 00:09:23.529 "supported_io_types": { 00:09:23.529 "read": true, 00:09:23.529 "write": true, 00:09:23.529 "unmap": true, 00:09:23.529 "flush": true, 00:09:23.529 "reset": true, 00:09:23.529 "nvme_admin": false, 00:09:23.529 "nvme_io": false, 00:09:23.529 "nvme_io_md": false, 00:09:23.529 "write_zeroes": true, 00:09:23.529 "zcopy": true, 00:09:23.529 "get_zone_info": false, 00:09:23.529 "zone_management": false, 00:09:23.529 "zone_append": false, 00:09:23.529 "compare": false, 00:09:23.529 "compare_and_write": false, 00:09:23.529 "abort": true, 00:09:23.529 "seek_hole": false, 00:09:23.529 "seek_data": false, 00:09:23.529 
"copy": true, 00:09:23.529 "nvme_iov_md": false 00:09:23.529 }, 00:09:23.529 "memory_domains": [ 00:09:23.529 { 00:09:23.529 "dma_device_id": "system", 00:09:23.529 "dma_device_type": 1 00:09:23.529 }, 00:09:23.529 { 00:09:23.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.529 "dma_device_type": 2 00:09:23.529 } 00:09:23.529 ], 00:09:23.529 "driver_specific": {} 00:09:23.529 } 00:09:23.529 ] 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.529 03:09:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.529 "name": "Existed_Raid", 00:09:23.529 "uuid": "9d22f898-5758-41fb-adca-f5cc14f33706", 00:09:23.529 "strip_size_kb": 0, 00:09:23.529 "state": "online", 00:09:23.529 "raid_level": "raid1", 00:09:23.529 "superblock": false, 00:09:23.529 "num_base_bdevs": 3, 00:09:23.529 "num_base_bdevs_discovered": 3, 00:09:23.529 "num_base_bdevs_operational": 3, 00:09:23.529 "base_bdevs_list": [ 00:09:23.529 { 00:09:23.529 "name": "BaseBdev1", 00:09:23.529 "uuid": "86d27502-dd78-417b-aa3b-217f08c1beb4", 00:09:23.529 "is_configured": true, 00:09:23.529 "data_offset": 0, 00:09:23.529 "data_size": 65536 00:09:23.529 }, 00:09:23.529 { 00:09:23.529 "name": "BaseBdev2", 00:09:23.529 "uuid": "641a6884-04d7-4d25-8f4b-928afb7a0577", 00:09:23.529 "is_configured": true, 00:09:23.529 "data_offset": 0, 00:09:23.529 "data_size": 65536 00:09:23.529 }, 00:09:23.529 { 00:09:23.529 "name": "BaseBdev3", 00:09:23.529 "uuid": "66c8243e-ba5f-44b8-bb2e-1879d7d455bc", 00:09:23.529 "is_configured": true, 00:09:23.529 "data_offset": 0, 00:09:23.529 "data_size": 65536 00:09:23.529 } 00:09:23.529 ] 00:09:23.529 }' 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.529 03:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.790 03:09:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:23.790 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:23.790 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:23.790 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:23.790 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:23.790 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:23.790 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:23.790 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.790 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.790 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:23.790 [2024-11-18 03:09:27.329663] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.790 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.050 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:24.050 "name": "Existed_Raid", 00:09:24.050 "aliases": [ 00:09:24.050 "9d22f898-5758-41fb-adca-f5cc14f33706" 00:09:24.050 ], 00:09:24.050 "product_name": "Raid Volume", 00:09:24.050 "block_size": 512, 00:09:24.050 "num_blocks": 65536, 00:09:24.050 "uuid": "9d22f898-5758-41fb-adca-f5cc14f33706", 00:09:24.050 "assigned_rate_limits": { 00:09:24.050 "rw_ios_per_sec": 0, 00:09:24.050 "rw_mbytes_per_sec": 0, 00:09:24.050 "r_mbytes_per_sec": 0, 00:09:24.050 "w_mbytes_per_sec": 0 00:09:24.050 }, 00:09:24.050 "claimed": false, 00:09:24.050 "zoned": false, 
00:09:24.050 "supported_io_types": { 00:09:24.050 "read": true, 00:09:24.050 "write": true, 00:09:24.050 "unmap": false, 00:09:24.050 "flush": false, 00:09:24.050 "reset": true, 00:09:24.050 "nvme_admin": false, 00:09:24.050 "nvme_io": false, 00:09:24.050 "nvme_io_md": false, 00:09:24.050 "write_zeroes": true, 00:09:24.050 "zcopy": false, 00:09:24.050 "get_zone_info": false, 00:09:24.050 "zone_management": false, 00:09:24.050 "zone_append": false, 00:09:24.050 "compare": false, 00:09:24.050 "compare_and_write": false, 00:09:24.050 "abort": false, 00:09:24.050 "seek_hole": false, 00:09:24.050 "seek_data": false, 00:09:24.050 "copy": false, 00:09:24.050 "nvme_iov_md": false 00:09:24.050 }, 00:09:24.050 "memory_domains": [ 00:09:24.050 { 00:09:24.050 "dma_device_id": "system", 00:09:24.050 "dma_device_type": 1 00:09:24.051 }, 00:09:24.051 { 00:09:24.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.051 "dma_device_type": 2 00:09:24.051 }, 00:09:24.051 { 00:09:24.051 "dma_device_id": "system", 00:09:24.051 "dma_device_type": 1 00:09:24.051 }, 00:09:24.051 { 00:09:24.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.051 "dma_device_type": 2 00:09:24.051 }, 00:09:24.051 { 00:09:24.051 "dma_device_id": "system", 00:09:24.051 "dma_device_type": 1 00:09:24.051 }, 00:09:24.051 { 00:09:24.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.051 "dma_device_type": 2 00:09:24.051 } 00:09:24.051 ], 00:09:24.051 "driver_specific": { 00:09:24.051 "raid": { 00:09:24.051 "uuid": "9d22f898-5758-41fb-adca-f5cc14f33706", 00:09:24.051 "strip_size_kb": 0, 00:09:24.051 "state": "online", 00:09:24.051 "raid_level": "raid1", 00:09:24.051 "superblock": false, 00:09:24.051 "num_base_bdevs": 3, 00:09:24.051 "num_base_bdevs_discovered": 3, 00:09:24.051 "num_base_bdevs_operational": 3, 00:09:24.051 "base_bdevs_list": [ 00:09:24.051 { 00:09:24.051 "name": "BaseBdev1", 00:09:24.051 "uuid": "86d27502-dd78-417b-aa3b-217f08c1beb4", 00:09:24.051 "is_configured": true, 00:09:24.051 
"data_offset": 0, 00:09:24.051 "data_size": 65536 00:09:24.051 }, 00:09:24.051 { 00:09:24.051 "name": "BaseBdev2", 00:09:24.051 "uuid": "641a6884-04d7-4d25-8f4b-928afb7a0577", 00:09:24.051 "is_configured": true, 00:09:24.051 "data_offset": 0, 00:09:24.051 "data_size": 65536 00:09:24.051 }, 00:09:24.051 { 00:09:24.051 "name": "BaseBdev3", 00:09:24.051 "uuid": "66c8243e-ba5f-44b8-bb2e-1879d7d455bc", 00:09:24.051 "is_configured": true, 00:09:24.051 "data_offset": 0, 00:09:24.051 "data_size": 65536 00:09:24.051 } 00:09:24.051 ] 00:09:24.051 } 00:09:24.051 } 00:09:24.051 }' 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:24.051 BaseBdev2 00:09:24.051 BaseBdev3' 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.051 [2024-11-18 03:09:27.600942] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.051 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.311 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.311 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.311 "name": "Existed_Raid", 00:09:24.311 "uuid": "9d22f898-5758-41fb-adca-f5cc14f33706", 00:09:24.311 "strip_size_kb": 0, 00:09:24.311 "state": "online", 00:09:24.311 "raid_level": "raid1", 00:09:24.311 "superblock": false, 00:09:24.311 "num_base_bdevs": 3, 00:09:24.311 "num_base_bdevs_discovered": 2, 00:09:24.311 "num_base_bdevs_operational": 2, 00:09:24.311 "base_bdevs_list": [ 00:09:24.311 { 00:09:24.311 "name": null, 00:09:24.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.311 "is_configured": false, 00:09:24.311 "data_offset": 0, 00:09:24.311 "data_size": 65536 00:09:24.311 }, 00:09:24.311 { 00:09:24.311 "name": "BaseBdev2", 00:09:24.311 "uuid": "641a6884-04d7-4d25-8f4b-928afb7a0577", 00:09:24.311 "is_configured": true, 00:09:24.311 "data_offset": 0, 00:09:24.311 "data_size": 65536 00:09:24.311 }, 00:09:24.311 { 00:09:24.311 "name": "BaseBdev3", 00:09:24.311 "uuid": "66c8243e-ba5f-44b8-bb2e-1879d7d455bc", 00:09:24.311 "is_configured": true, 00:09:24.311 "data_offset": 0, 00:09:24.311 "data_size": 65536 00:09:24.311 } 00:09:24.311 ] 
00:09:24.311 }' 00:09:24.311 03:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.311 03:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.571 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:24.571 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.571 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:24.571 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.571 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.571 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.572 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.572 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:24.572 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:24.572 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:24.572 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.572 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.572 [2024-11-18 03:09:28.091899] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:24.572 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.572 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:24.572 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.572 03:09:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.572 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:24.572 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.572 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.572 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.833 [2024-11-18 03:09:28.159350] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:24.833 [2024-11-18 03:09:28.159452] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.833 [2024-11-18 03:09:28.171765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.833 [2024-11-18 03:09:28.171818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.833 [2024-11-18 03:09:28.171834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:24.833 03:09:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.833 BaseBdev2 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:24.833 
03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.833 [ 00:09:24.833 { 00:09:24.833 "name": "BaseBdev2", 00:09:24.833 "aliases": [ 00:09:24.833 "e98c0555-1571-45ec-801b-06942b709e28" 00:09:24.833 ], 00:09:24.833 "product_name": "Malloc disk", 00:09:24.833 "block_size": 512, 00:09:24.833 "num_blocks": 65536, 00:09:24.833 "uuid": "e98c0555-1571-45ec-801b-06942b709e28", 00:09:24.833 "assigned_rate_limits": { 00:09:24.833 "rw_ios_per_sec": 0, 00:09:24.833 "rw_mbytes_per_sec": 0, 00:09:24.833 "r_mbytes_per_sec": 0, 00:09:24.833 "w_mbytes_per_sec": 0 00:09:24.833 }, 00:09:24.833 "claimed": false, 00:09:24.833 "zoned": false, 00:09:24.833 "supported_io_types": { 00:09:24.833 "read": true, 00:09:24.833 "write": true, 00:09:24.833 "unmap": true, 00:09:24.833 "flush": true, 00:09:24.833 "reset": true, 00:09:24.833 "nvme_admin": false, 00:09:24.833 "nvme_io": false, 00:09:24.833 "nvme_io_md": false, 00:09:24.833 "write_zeroes": true, 
00:09:24.833 "zcopy": true, 00:09:24.833 "get_zone_info": false, 00:09:24.833 "zone_management": false, 00:09:24.833 "zone_append": false, 00:09:24.833 "compare": false, 00:09:24.833 "compare_and_write": false, 00:09:24.833 "abort": true, 00:09:24.833 "seek_hole": false, 00:09:24.833 "seek_data": false, 00:09:24.833 "copy": true, 00:09:24.833 "nvme_iov_md": false 00:09:24.833 }, 00:09:24.833 "memory_domains": [ 00:09:24.833 { 00:09:24.833 "dma_device_id": "system", 00:09:24.833 "dma_device_type": 1 00:09:24.833 }, 00:09:24.833 { 00:09:24.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.833 "dma_device_type": 2 00:09:24.833 } 00:09:24.833 ], 00:09:24.833 "driver_specific": {} 00:09:24.833 } 00:09:24.833 ] 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.833 BaseBdev3 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:24.833 03:09:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.833 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.834 [ 00:09:24.834 { 00:09:24.834 "name": "BaseBdev3", 00:09:24.834 "aliases": [ 00:09:24.834 "066e7d2f-b127-45a2-a3e6-9ea617c3562b" 00:09:24.834 ], 00:09:24.834 "product_name": "Malloc disk", 00:09:24.834 "block_size": 512, 00:09:24.834 "num_blocks": 65536, 00:09:24.834 "uuid": "066e7d2f-b127-45a2-a3e6-9ea617c3562b", 00:09:24.834 "assigned_rate_limits": { 00:09:24.834 "rw_ios_per_sec": 0, 00:09:24.834 "rw_mbytes_per_sec": 0, 00:09:24.834 "r_mbytes_per_sec": 0, 00:09:24.834 "w_mbytes_per_sec": 0 00:09:24.834 }, 00:09:24.834 "claimed": false, 00:09:24.834 "zoned": false, 00:09:24.834 "supported_io_types": { 00:09:24.834 "read": true, 00:09:24.834 "write": true, 00:09:24.834 "unmap": true, 00:09:24.834 "flush": true, 00:09:24.834 "reset": true, 00:09:24.834 "nvme_admin": false, 00:09:24.834 "nvme_io": false, 00:09:24.834 "nvme_io_md": false, 00:09:24.834 "write_zeroes": true, 
00:09:24.834 "zcopy": true, 00:09:24.834 "get_zone_info": false, 00:09:24.834 "zone_management": false, 00:09:24.834 "zone_append": false, 00:09:24.834 "compare": false, 00:09:24.834 "compare_and_write": false, 00:09:24.834 "abort": true, 00:09:24.834 "seek_hole": false, 00:09:24.834 "seek_data": false, 00:09:24.834 "copy": true, 00:09:24.834 "nvme_iov_md": false 00:09:24.834 }, 00:09:24.834 "memory_domains": [ 00:09:24.834 { 00:09:24.834 "dma_device_id": "system", 00:09:24.834 "dma_device_type": 1 00:09:24.834 }, 00:09:24.834 { 00:09:24.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.834 "dma_device_type": 2 00:09:24.834 } 00:09:24.834 ], 00:09:24.834 "driver_specific": {} 00:09:24.834 } 00:09:24.834 ] 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.834 [2024-11-18 03:09:28.337429] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:24.834 [2024-11-18 03:09:28.337546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:24.834 [2024-11-18 03:09:28.337594] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.834 [2024-11-18 03:09:28.339581] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:24.834 "name": "Existed_Raid", 00:09:24.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.834 "strip_size_kb": 0, 00:09:24.834 "state": "configuring", 00:09:24.834 "raid_level": "raid1", 00:09:24.834 "superblock": false, 00:09:24.834 "num_base_bdevs": 3, 00:09:24.834 "num_base_bdevs_discovered": 2, 00:09:24.834 "num_base_bdevs_operational": 3, 00:09:24.834 "base_bdevs_list": [ 00:09:24.834 { 00:09:24.834 "name": "BaseBdev1", 00:09:24.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.834 "is_configured": false, 00:09:24.834 "data_offset": 0, 00:09:24.834 "data_size": 0 00:09:24.834 }, 00:09:24.834 { 00:09:24.834 "name": "BaseBdev2", 00:09:24.834 "uuid": "e98c0555-1571-45ec-801b-06942b709e28", 00:09:24.834 "is_configured": true, 00:09:24.834 "data_offset": 0, 00:09:24.834 "data_size": 65536 00:09:24.834 }, 00:09:24.834 { 00:09:24.834 "name": "BaseBdev3", 00:09:24.834 "uuid": "066e7d2f-b127-45a2-a3e6-9ea617c3562b", 00:09:24.834 "is_configured": true, 00:09:24.834 "data_offset": 0, 00:09:24.834 "data_size": 65536 00:09:24.834 } 00:09:24.834 ] 00:09:24.834 }' 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.834 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.405 [2024-11-18 03:09:28.740791] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.405 "name": "Existed_Raid", 00:09:25.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.405 "strip_size_kb": 0, 00:09:25.405 "state": "configuring", 00:09:25.405 "raid_level": "raid1", 00:09:25.405 "superblock": false, 00:09:25.405 "num_base_bdevs": 3, 
00:09:25.405 "num_base_bdevs_discovered": 1, 00:09:25.405 "num_base_bdevs_operational": 3, 00:09:25.405 "base_bdevs_list": [ 00:09:25.405 { 00:09:25.405 "name": "BaseBdev1", 00:09:25.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.405 "is_configured": false, 00:09:25.405 "data_offset": 0, 00:09:25.405 "data_size": 0 00:09:25.405 }, 00:09:25.405 { 00:09:25.405 "name": null, 00:09:25.405 "uuid": "e98c0555-1571-45ec-801b-06942b709e28", 00:09:25.405 "is_configured": false, 00:09:25.405 "data_offset": 0, 00:09:25.405 "data_size": 65536 00:09:25.405 }, 00:09:25.405 { 00:09:25.405 "name": "BaseBdev3", 00:09:25.405 "uuid": "066e7d2f-b127-45a2-a3e6-9ea617c3562b", 00:09:25.405 "is_configured": true, 00:09:25.405 "data_offset": 0, 00:09:25.405 "data_size": 65536 00:09:25.405 } 00:09:25.405 ] 00:09:25.405 }' 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.405 03:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.666 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.666 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:25.666 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.666 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.666 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.666 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:25.666 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:25.666 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.666 03:09:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.925 [2024-11-18 03:09:29.247374] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.925 BaseBdev1 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.925 [ 00:09:25.925 { 00:09:25.925 "name": "BaseBdev1", 00:09:25.925 "aliases": [ 00:09:25.925 "b4144f7e-fa6c-4c8d-a12e-8c1da905258f" 00:09:25.925 ], 00:09:25.925 "product_name": "Malloc disk", 
00:09:25.925 "block_size": 512, 00:09:25.925 "num_blocks": 65536, 00:09:25.925 "uuid": "b4144f7e-fa6c-4c8d-a12e-8c1da905258f", 00:09:25.925 "assigned_rate_limits": { 00:09:25.925 "rw_ios_per_sec": 0, 00:09:25.925 "rw_mbytes_per_sec": 0, 00:09:25.925 "r_mbytes_per_sec": 0, 00:09:25.925 "w_mbytes_per_sec": 0 00:09:25.925 }, 00:09:25.925 "claimed": true, 00:09:25.925 "claim_type": "exclusive_write", 00:09:25.925 "zoned": false, 00:09:25.925 "supported_io_types": { 00:09:25.925 "read": true, 00:09:25.925 "write": true, 00:09:25.925 "unmap": true, 00:09:25.925 "flush": true, 00:09:25.925 "reset": true, 00:09:25.925 "nvme_admin": false, 00:09:25.925 "nvme_io": false, 00:09:25.925 "nvme_io_md": false, 00:09:25.925 "write_zeroes": true, 00:09:25.925 "zcopy": true, 00:09:25.925 "get_zone_info": false, 00:09:25.925 "zone_management": false, 00:09:25.925 "zone_append": false, 00:09:25.925 "compare": false, 00:09:25.925 "compare_and_write": false, 00:09:25.925 "abort": true, 00:09:25.925 "seek_hole": false, 00:09:25.925 "seek_data": false, 00:09:25.925 "copy": true, 00:09:25.925 "nvme_iov_md": false 00:09:25.925 }, 00:09:25.925 "memory_domains": [ 00:09:25.925 { 00:09:25.925 "dma_device_id": "system", 00:09:25.925 "dma_device_type": 1 00:09:25.925 }, 00:09:25.925 { 00:09:25.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.925 "dma_device_type": 2 00:09:25.925 } 00:09:25.925 ], 00:09:25.925 "driver_specific": {} 00:09:25.925 } 00:09:25.925 ] 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.925 "name": "Existed_Raid", 00:09:25.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.925 "strip_size_kb": 0, 00:09:25.925 "state": "configuring", 00:09:25.925 "raid_level": "raid1", 00:09:25.925 "superblock": false, 00:09:25.925 "num_base_bdevs": 3, 00:09:25.925 "num_base_bdevs_discovered": 2, 00:09:25.925 "num_base_bdevs_operational": 3, 00:09:25.925 "base_bdevs_list": [ 00:09:25.925 { 00:09:25.925 "name": "BaseBdev1", 00:09:25.925 "uuid": 
"b4144f7e-fa6c-4c8d-a12e-8c1da905258f", 00:09:25.925 "is_configured": true, 00:09:25.925 "data_offset": 0, 00:09:25.925 "data_size": 65536 00:09:25.925 }, 00:09:25.925 { 00:09:25.925 "name": null, 00:09:25.925 "uuid": "e98c0555-1571-45ec-801b-06942b709e28", 00:09:25.925 "is_configured": false, 00:09:25.925 "data_offset": 0, 00:09:25.925 "data_size": 65536 00:09:25.925 }, 00:09:25.925 { 00:09:25.925 "name": "BaseBdev3", 00:09:25.925 "uuid": "066e7d2f-b127-45a2-a3e6-9ea617c3562b", 00:09:25.925 "is_configured": true, 00:09:25.925 "data_offset": 0, 00:09:25.925 "data_size": 65536 00:09:25.925 } 00:09:25.925 ] 00:09:25.925 }' 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.925 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.184 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:26.184 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.184 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.184 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.184 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.444 [2024-11-18 03:09:29.782541] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:26.444 03:09:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.444 "name": "Existed_Raid", 00:09:26.444 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:26.444 "strip_size_kb": 0, 00:09:26.444 "state": "configuring", 00:09:26.444 "raid_level": "raid1", 00:09:26.444 "superblock": false, 00:09:26.444 "num_base_bdevs": 3, 00:09:26.444 "num_base_bdevs_discovered": 1, 00:09:26.444 "num_base_bdevs_operational": 3, 00:09:26.444 "base_bdevs_list": [ 00:09:26.444 { 00:09:26.444 "name": "BaseBdev1", 00:09:26.444 "uuid": "b4144f7e-fa6c-4c8d-a12e-8c1da905258f", 00:09:26.444 "is_configured": true, 00:09:26.444 "data_offset": 0, 00:09:26.444 "data_size": 65536 00:09:26.444 }, 00:09:26.444 { 00:09:26.444 "name": null, 00:09:26.444 "uuid": "e98c0555-1571-45ec-801b-06942b709e28", 00:09:26.444 "is_configured": false, 00:09:26.444 "data_offset": 0, 00:09:26.444 "data_size": 65536 00:09:26.444 }, 00:09:26.444 { 00:09:26.444 "name": null, 00:09:26.444 "uuid": "066e7d2f-b127-45a2-a3e6-9ea617c3562b", 00:09:26.444 "is_configured": false, 00:09:26.444 "data_offset": 0, 00:09:26.444 "data_size": 65536 00:09:26.444 } 00:09:26.444 ] 00:09:26.444 }' 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.444 03:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.704 [2024-11-18 03:09:30.221874] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.704 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.704 "name": "Existed_Raid", 00:09:26.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.704 "strip_size_kb": 0, 00:09:26.704 "state": "configuring", 00:09:26.704 "raid_level": "raid1", 00:09:26.704 "superblock": false, 00:09:26.704 "num_base_bdevs": 3, 00:09:26.704 "num_base_bdevs_discovered": 2, 00:09:26.704 "num_base_bdevs_operational": 3, 00:09:26.704 "base_bdevs_list": [ 00:09:26.704 { 00:09:26.704 "name": "BaseBdev1", 00:09:26.704 "uuid": "b4144f7e-fa6c-4c8d-a12e-8c1da905258f", 00:09:26.704 "is_configured": true, 00:09:26.704 "data_offset": 0, 00:09:26.704 "data_size": 65536 00:09:26.704 }, 00:09:26.704 { 00:09:26.704 "name": null, 00:09:26.704 "uuid": "e98c0555-1571-45ec-801b-06942b709e28", 00:09:26.704 "is_configured": false, 00:09:26.704 "data_offset": 0, 00:09:26.704 "data_size": 65536 00:09:26.704 }, 00:09:26.704 { 00:09:26.704 "name": "BaseBdev3", 00:09:26.704 "uuid": "066e7d2f-b127-45a2-a3e6-9ea617c3562b", 00:09:26.704 "is_configured": true, 00:09:26.704 "data_offset": 0, 00:09:26.705 "data_size": 65536 00:09:26.705 } 00:09:26.705 ] 00:09:26.705 }' 00:09:26.705 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.705 03:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.274 [2024-11-18 03:09:30.756943] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.274 03:09:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.274 "name": "Existed_Raid", 00:09:27.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.274 "strip_size_kb": 0, 00:09:27.274 "state": "configuring", 00:09:27.274 "raid_level": "raid1", 00:09:27.274 "superblock": false, 00:09:27.274 "num_base_bdevs": 3, 00:09:27.274 "num_base_bdevs_discovered": 1, 00:09:27.274 "num_base_bdevs_operational": 3, 00:09:27.274 "base_bdevs_list": [ 00:09:27.274 { 00:09:27.274 "name": null, 00:09:27.274 "uuid": "b4144f7e-fa6c-4c8d-a12e-8c1da905258f", 00:09:27.274 "is_configured": false, 00:09:27.274 "data_offset": 0, 00:09:27.274 "data_size": 65536 00:09:27.274 }, 00:09:27.274 { 00:09:27.274 "name": null, 00:09:27.274 "uuid": "e98c0555-1571-45ec-801b-06942b709e28", 00:09:27.274 "is_configured": false, 00:09:27.274 "data_offset": 0, 00:09:27.274 "data_size": 65536 00:09:27.274 }, 00:09:27.274 { 00:09:27.274 "name": "BaseBdev3", 00:09:27.274 "uuid": "066e7d2f-b127-45a2-a3e6-9ea617c3562b", 00:09:27.274 "is_configured": true, 00:09:27.274 "data_offset": 0, 00:09:27.274 "data_size": 65536 00:09:27.274 } 00:09:27.274 ] 00:09:27.274 }' 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.274 03:09:30 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.844 [2024-11-18 03:09:31.231119] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.844 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.844 "name": "Existed_Raid", 00:09:27.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.844 "strip_size_kb": 0, 00:09:27.844 "state": "configuring", 00:09:27.844 "raid_level": "raid1", 00:09:27.844 "superblock": false, 00:09:27.844 "num_base_bdevs": 3, 00:09:27.844 "num_base_bdevs_discovered": 2, 00:09:27.844 "num_base_bdevs_operational": 3, 00:09:27.844 "base_bdevs_list": [ 00:09:27.844 { 00:09:27.844 "name": null, 00:09:27.844 "uuid": "b4144f7e-fa6c-4c8d-a12e-8c1da905258f", 00:09:27.845 "is_configured": false, 00:09:27.845 "data_offset": 0, 00:09:27.845 "data_size": 65536 00:09:27.845 }, 00:09:27.845 { 00:09:27.845 "name": "BaseBdev2", 00:09:27.845 "uuid": "e98c0555-1571-45ec-801b-06942b709e28", 00:09:27.845 "is_configured": true, 00:09:27.845 "data_offset": 0, 00:09:27.845 "data_size": 65536 00:09:27.845 }, 00:09:27.845 { 
00:09:27.845 "name": "BaseBdev3", 00:09:27.845 "uuid": "066e7d2f-b127-45a2-a3e6-9ea617c3562b", 00:09:27.845 "is_configured": true, 00:09:27.845 "data_offset": 0, 00:09:27.845 "data_size": 65536 00:09:27.845 } 00:09:27.845 ] 00:09:27.845 }' 00:09:27.845 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.845 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.104 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.104 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.104 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.104 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b4144f7e-fa6c-4c8d-a12e-8c1da905258f 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.364 03:09:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.364 [2024-11-18 03:09:31.773404] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:28.364 [2024-11-18 03:09:31.773454] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:28.364 [2024-11-18 03:09:31.773462] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:28.364 [2024-11-18 03:09:31.773718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:28.364 [2024-11-18 03:09:31.773851] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:28.364 [2024-11-18 03:09:31.773865] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:28.364 [2024-11-18 03:09:31.774082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.364 NewBaseBdev 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.364 [ 00:09:28.364 { 00:09:28.364 "name": "NewBaseBdev", 00:09:28.364 "aliases": [ 00:09:28.364 "b4144f7e-fa6c-4c8d-a12e-8c1da905258f" 00:09:28.364 ], 00:09:28.364 "product_name": "Malloc disk", 00:09:28.364 "block_size": 512, 00:09:28.364 "num_blocks": 65536, 00:09:28.364 "uuid": "b4144f7e-fa6c-4c8d-a12e-8c1da905258f", 00:09:28.364 "assigned_rate_limits": { 00:09:28.364 "rw_ios_per_sec": 0, 00:09:28.364 "rw_mbytes_per_sec": 0, 00:09:28.364 "r_mbytes_per_sec": 0, 00:09:28.364 "w_mbytes_per_sec": 0 00:09:28.364 }, 00:09:28.364 "claimed": true, 00:09:28.364 "claim_type": "exclusive_write", 00:09:28.364 "zoned": false, 00:09:28.364 "supported_io_types": { 00:09:28.364 "read": true, 00:09:28.364 "write": true, 00:09:28.364 "unmap": true, 00:09:28.364 "flush": true, 00:09:28.364 "reset": true, 00:09:28.364 "nvme_admin": false, 00:09:28.364 "nvme_io": false, 00:09:28.364 "nvme_io_md": false, 00:09:28.364 "write_zeroes": true, 00:09:28.364 "zcopy": true, 00:09:28.364 "get_zone_info": false, 00:09:28.364 "zone_management": false, 00:09:28.364 "zone_append": false, 00:09:28.364 "compare": false, 00:09:28.364 "compare_and_write": false, 00:09:28.364 "abort": true, 00:09:28.364 "seek_hole": false, 00:09:28.364 "seek_data": false, 00:09:28.364 "copy": true, 00:09:28.364 "nvme_iov_md": false 00:09:28.364 }, 00:09:28.364 "memory_domains": [ 00:09:28.364 { 00:09:28.364 
"dma_device_id": "system", 00:09:28.364 "dma_device_type": 1 00:09:28.364 }, 00:09:28.364 { 00:09:28.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.364 "dma_device_type": 2 00:09:28.364 } 00:09:28.364 ], 00:09:28.364 "driver_specific": {} 00:09:28.364 } 00:09:28.364 ] 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.364 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:28.365 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:28.365 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.365 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.365 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.365 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.365 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.365 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.365 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.365 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.365 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.365 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.365 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.365 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:28.365 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.365 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.365 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.365 "name": "Existed_Raid", 00:09:28.365 "uuid": "c1842e5c-c12e-4d40-929b-842a56539ef4", 00:09:28.365 "strip_size_kb": 0, 00:09:28.365 "state": "online", 00:09:28.365 "raid_level": "raid1", 00:09:28.365 "superblock": false, 00:09:28.365 "num_base_bdevs": 3, 00:09:28.365 "num_base_bdevs_discovered": 3, 00:09:28.365 "num_base_bdevs_operational": 3, 00:09:28.365 "base_bdevs_list": [ 00:09:28.365 { 00:09:28.365 "name": "NewBaseBdev", 00:09:28.365 "uuid": "b4144f7e-fa6c-4c8d-a12e-8c1da905258f", 00:09:28.365 "is_configured": true, 00:09:28.365 "data_offset": 0, 00:09:28.365 "data_size": 65536 00:09:28.365 }, 00:09:28.365 { 00:09:28.365 "name": "BaseBdev2", 00:09:28.365 "uuid": "e98c0555-1571-45ec-801b-06942b709e28", 00:09:28.365 "is_configured": true, 00:09:28.365 "data_offset": 0, 00:09:28.365 "data_size": 65536 00:09:28.365 }, 00:09:28.365 { 00:09:28.365 "name": "BaseBdev3", 00:09:28.365 "uuid": "066e7d2f-b127-45a2-a3e6-9ea617c3562b", 00:09:28.365 "is_configured": true, 00:09:28.365 "data_offset": 0, 00:09:28.365 "data_size": 65536 00:09:28.365 } 00:09:28.365 ] 00:09:28.365 }' 00:09:28.365 03:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.365 03:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.935 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:28.935 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:28.935 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:28.935 03:09:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:28.935 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:28.935 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:28.935 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:28.935 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:28.935 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.935 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.935 [2024-11-18 03:09:32.276981] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:28.935 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.935 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:28.935 "name": "Existed_Raid", 00:09:28.935 "aliases": [ 00:09:28.935 "c1842e5c-c12e-4d40-929b-842a56539ef4" 00:09:28.935 ], 00:09:28.935 "product_name": "Raid Volume", 00:09:28.935 "block_size": 512, 00:09:28.935 "num_blocks": 65536, 00:09:28.935 "uuid": "c1842e5c-c12e-4d40-929b-842a56539ef4", 00:09:28.935 "assigned_rate_limits": { 00:09:28.935 "rw_ios_per_sec": 0, 00:09:28.935 "rw_mbytes_per_sec": 0, 00:09:28.935 "r_mbytes_per_sec": 0, 00:09:28.935 "w_mbytes_per_sec": 0 00:09:28.935 }, 00:09:28.935 "claimed": false, 00:09:28.935 "zoned": false, 00:09:28.935 "supported_io_types": { 00:09:28.935 "read": true, 00:09:28.935 "write": true, 00:09:28.935 "unmap": false, 00:09:28.935 "flush": false, 00:09:28.935 "reset": true, 00:09:28.935 "nvme_admin": false, 00:09:28.935 "nvme_io": false, 00:09:28.935 "nvme_io_md": false, 00:09:28.935 "write_zeroes": true, 00:09:28.935 "zcopy": false, 00:09:28.935 
"get_zone_info": false, 00:09:28.935 "zone_management": false, 00:09:28.935 "zone_append": false, 00:09:28.935 "compare": false, 00:09:28.935 "compare_and_write": false, 00:09:28.935 "abort": false, 00:09:28.935 "seek_hole": false, 00:09:28.935 "seek_data": false, 00:09:28.935 "copy": false, 00:09:28.935 "nvme_iov_md": false 00:09:28.935 }, 00:09:28.935 "memory_domains": [ 00:09:28.935 { 00:09:28.935 "dma_device_id": "system", 00:09:28.935 "dma_device_type": 1 00:09:28.935 }, 00:09:28.935 { 00:09:28.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.935 "dma_device_type": 2 00:09:28.935 }, 00:09:28.935 { 00:09:28.935 "dma_device_id": "system", 00:09:28.935 "dma_device_type": 1 00:09:28.935 }, 00:09:28.935 { 00:09:28.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.935 "dma_device_type": 2 00:09:28.935 }, 00:09:28.935 { 00:09:28.935 "dma_device_id": "system", 00:09:28.935 "dma_device_type": 1 00:09:28.935 }, 00:09:28.935 { 00:09:28.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.935 "dma_device_type": 2 00:09:28.935 } 00:09:28.935 ], 00:09:28.935 "driver_specific": { 00:09:28.935 "raid": { 00:09:28.935 "uuid": "c1842e5c-c12e-4d40-929b-842a56539ef4", 00:09:28.935 "strip_size_kb": 0, 00:09:28.935 "state": "online", 00:09:28.935 "raid_level": "raid1", 00:09:28.935 "superblock": false, 00:09:28.935 "num_base_bdevs": 3, 00:09:28.935 "num_base_bdevs_discovered": 3, 00:09:28.935 "num_base_bdevs_operational": 3, 00:09:28.935 "base_bdevs_list": [ 00:09:28.935 { 00:09:28.935 "name": "NewBaseBdev", 00:09:28.935 "uuid": "b4144f7e-fa6c-4c8d-a12e-8c1da905258f", 00:09:28.935 "is_configured": true, 00:09:28.935 "data_offset": 0, 00:09:28.935 "data_size": 65536 00:09:28.935 }, 00:09:28.935 { 00:09:28.935 "name": "BaseBdev2", 00:09:28.935 "uuid": "e98c0555-1571-45ec-801b-06942b709e28", 00:09:28.935 "is_configured": true, 00:09:28.935 "data_offset": 0, 00:09:28.935 "data_size": 65536 00:09:28.935 }, 00:09:28.935 { 00:09:28.935 "name": "BaseBdev3", 00:09:28.936 "uuid": 
"066e7d2f-b127-45a2-a3e6-9ea617c3562b", 00:09:28.936 "is_configured": true, 00:09:28.936 "data_offset": 0, 00:09:28.936 "data_size": 65536 00:09:28.936 } 00:09:28.936 ] 00:09:28.936 } 00:09:28.936 } 00:09:28.936 }' 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:28.936 BaseBdev2 00:09:28.936 BaseBdev3' 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.936 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.195 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.195 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.195 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.195 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.195 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.195 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.195 
[2024-11-18 03:09:32.544164] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.195 [2024-11-18 03:09:32.544250] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.195 [2024-11-18 03:09:32.544333] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.195 [2024-11-18 03:09:32.544622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.195 [2024-11-18 03:09:32.544634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:29.195 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.195 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78606 00:09:29.195 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78606 ']' 00:09:29.195 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 78606 00:09:29.195 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:29.195 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:29.195 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78606 00:09:29.195 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:29.195 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:29.195 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78606' 00:09:29.195 killing process with pid 78606 00:09:29.195 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78606 00:09:29.195 [2024-11-18 
03:09:32.589441] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.195 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78606 00:09:29.195 [2024-11-18 03:09:32.621262] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:29.455 00:09:29.455 real 0m8.927s 00:09:29.455 user 0m15.244s 00:09:29.455 sys 0m1.827s 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.455 ************************************ 00:09:29.455 END TEST raid_state_function_test 00:09:29.455 ************************************ 00:09:29.455 03:09:32 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:29.455 03:09:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:29.455 03:09:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.455 03:09:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:29.455 ************************************ 00:09:29.455 START TEST raid_state_function_test_sb 00:09:29.455 ************************************ 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:29.455 03:09:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:29.455 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:29.456 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:29.456 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:29.456 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:29.456 
03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:29.456 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:29.456 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:29.456 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79210 00:09:29.456 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:29.456 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79210' 00:09:29.456 Process raid pid: 79210 00:09:29.456 03:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79210 00:09:29.456 03:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 79210 ']' 00:09:29.456 03:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.456 03:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.456 03:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.456 03:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.456 03:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.456 [2024-11-18 03:09:33.029781] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:29.456 [2024-11-18 03:09:33.030006] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.716 [2024-11-18 03:09:33.194513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.716 [2024-11-18 03:09:33.244808] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.716 [2024-11-18 03:09:33.287534] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.716 [2024-11-18 03:09:33.287588] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.655 03:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.655 03:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:30.655 03:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:30.655 03:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.656 [2024-11-18 03:09:33.917735] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:30.656 [2024-11-18 03:09:33.917790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:30.656 [2024-11-18 03:09:33.917810] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:30.656 [2024-11-18 03:09:33.917820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:30.656 [2024-11-18 03:09:33.917827] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:30.656 [2024-11-18 03:09:33.917840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.656 "name": "Existed_Raid", 00:09:30.656 "uuid": "9dcbe4ca-52d9-4a29-b830-c5e1d25eb2d2", 00:09:30.656 "strip_size_kb": 0, 00:09:30.656 "state": "configuring", 00:09:30.656 "raid_level": "raid1", 00:09:30.656 "superblock": true, 00:09:30.656 "num_base_bdevs": 3, 00:09:30.656 "num_base_bdevs_discovered": 0, 00:09:30.656 "num_base_bdevs_operational": 3, 00:09:30.656 "base_bdevs_list": [ 00:09:30.656 { 00:09:30.656 "name": "BaseBdev1", 00:09:30.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.656 "is_configured": false, 00:09:30.656 "data_offset": 0, 00:09:30.656 "data_size": 0 00:09:30.656 }, 00:09:30.656 { 00:09:30.656 "name": "BaseBdev2", 00:09:30.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.656 "is_configured": false, 00:09:30.656 "data_offset": 0, 00:09:30.656 "data_size": 0 00:09:30.656 }, 00:09:30.656 { 00:09:30.656 "name": "BaseBdev3", 00:09:30.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.656 "is_configured": false, 00:09:30.656 "data_offset": 0, 00:09:30.656 "data_size": 0 00:09:30.656 } 00:09:30.656 ] 00:09:30.656 }' 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.656 03:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.916 [2024-11-18 03:09:34.368843] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.916 [2024-11-18 03:09:34.368944] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.916 [2024-11-18 03:09:34.380853] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:30.916 [2024-11-18 03:09:34.380973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:30.916 [2024-11-18 03:09:34.381004] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:30.916 [2024-11-18 03:09:34.381028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:30.916 [2024-11-18 03:09:34.381047] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:30.916 [2024-11-18 03:09:34.381068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.916 [2024-11-18 03:09:34.401858] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.916 BaseBdev1 
00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.916 [ 00:09:30.916 { 00:09:30.916 "name": "BaseBdev1", 00:09:30.916 "aliases": [ 00:09:30.916 "ab436a7e-5e6a-4f3e-802a-133984ace95f" 00:09:30.916 ], 00:09:30.916 "product_name": "Malloc disk", 00:09:30.916 "block_size": 512, 00:09:30.916 "num_blocks": 65536, 00:09:30.916 "uuid": "ab436a7e-5e6a-4f3e-802a-133984ace95f", 00:09:30.916 "assigned_rate_limits": { 00:09:30.916 
"rw_ios_per_sec": 0, 00:09:30.916 "rw_mbytes_per_sec": 0, 00:09:30.916 "r_mbytes_per_sec": 0, 00:09:30.916 "w_mbytes_per_sec": 0 00:09:30.916 }, 00:09:30.916 "claimed": true, 00:09:30.916 "claim_type": "exclusive_write", 00:09:30.916 "zoned": false, 00:09:30.916 "supported_io_types": { 00:09:30.916 "read": true, 00:09:30.916 "write": true, 00:09:30.916 "unmap": true, 00:09:30.916 "flush": true, 00:09:30.916 "reset": true, 00:09:30.916 "nvme_admin": false, 00:09:30.916 "nvme_io": false, 00:09:30.916 "nvme_io_md": false, 00:09:30.916 "write_zeroes": true, 00:09:30.916 "zcopy": true, 00:09:30.916 "get_zone_info": false, 00:09:30.916 "zone_management": false, 00:09:30.916 "zone_append": false, 00:09:30.916 "compare": false, 00:09:30.916 "compare_and_write": false, 00:09:30.916 "abort": true, 00:09:30.916 "seek_hole": false, 00:09:30.916 "seek_data": false, 00:09:30.916 "copy": true, 00:09:30.916 "nvme_iov_md": false 00:09:30.916 }, 00:09:30.916 "memory_domains": [ 00:09:30.916 { 00:09:30.916 "dma_device_id": "system", 00:09:30.916 "dma_device_type": 1 00:09:30.916 }, 00:09:30.916 { 00:09:30.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.916 "dma_device_type": 2 00:09:30.916 } 00:09:30.916 ], 00:09:30.916 "driver_specific": {} 00:09:30.916 } 00:09:30.916 ] 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.916 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:30.917 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.917 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.917 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.917 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.917 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.917 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.917 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.917 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.917 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.917 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.917 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.177 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.177 "name": "Existed_Raid", 00:09:31.177 "uuid": "4bc57054-a887-41e7-b355-1bda5783c94b", 00:09:31.177 "strip_size_kb": 0, 00:09:31.177 "state": "configuring", 00:09:31.177 "raid_level": "raid1", 00:09:31.177 "superblock": true, 00:09:31.177 "num_base_bdevs": 3, 00:09:31.177 "num_base_bdevs_discovered": 1, 00:09:31.177 "num_base_bdevs_operational": 3, 00:09:31.177 "base_bdevs_list": [ 00:09:31.177 { 00:09:31.177 "name": "BaseBdev1", 00:09:31.177 "uuid": "ab436a7e-5e6a-4f3e-802a-133984ace95f", 00:09:31.177 "is_configured": true, 00:09:31.177 "data_offset": 2048, 00:09:31.177 "data_size": 63488 
00:09:31.177 }, 00:09:31.177 { 00:09:31.177 "name": "BaseBdev2", 00:09:31.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.177 "is_configured": false, 00:09:31.177 "data_offset": 0, 00:09:31.177 "data_size": 0 00:09:31.177 }, 00:09:31.177 { 00:09:31.177 "name": "BaseBdev3", 00:09:31.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.177 "is_configured": false, 00:09:31.177 "data_offset": 0, 00:09:31.177 "data_size": 0 00:09:31.177 } 00:09:31.177 ] 00:09:31.177 }' 00:09:31.177 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.177 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.437 [2024-11-18 03:09:34.913058] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:31.437 [2024-11-18 03:09:34.913151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.437 [2024-11-18 03:09:34.925055] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.437 [2024-11-18 03:09:34.926926] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.437 [2024-11-18 03:09:34.927011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.437 [2024-11-18 03:09:34.927057] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:31.437 [2024-11-18 03:09:34.927082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.437 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.437 "name": "Existed_Raid", 00:09:31.437 "uuid": "a17d594f-c743-4b4b-9006-1d1d86993d0b", 00:09:31.437 "strip_size_kb": 0, 00:09:31.437 "state": "configuring", 00:09:31.437 "raid_level": "raid1", 00:09:31.437 "superblock": true, 00:09:31.437 "num_base_bdevs": 3, 00:09:31.437 "num_base_bdevs_discovered": 1, 00:09:31.437 "num_base_bdevs_operational": 3, 00:09:31.437 "base_bdevs_list": [ 00:09:31.437 { 00:09:31.437 "name": "BaseBdev1", 00:09:31.437 "uuid": "ab436a7e-5e6a-4f3e-802a-133984ace95f", 00:09:31.437 "is_configured": true, 00:09:31.437 "data_offset": 2048, 00:09:31.437 "data_size": 63488 00:09:31.437 }, 00:09:31.437 { 00:09:31.437 "name": "BaseBdev2", 00:09:31.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.438 "is_configured": false, 00:09:31.438 "data_offset": 0, 00:09:31.438 "data_size": 0 00:09:31.438 }, 00:09:31.438 { 00:09:31.438 "name": "BaseBdev3", 00:09:31.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.438 "is_configured": false, 00:09:31.438 "data_offset": 0, 00:09:31.438 "data_size": 0 00:09:31.438 } 00:09:31.438 ] 00:09:31.438 }' 00:09:31.438 03:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.438 03:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.033 [2024-11-18 03:09:35.425610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.033 BaseBdev2 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.033 [ 00:09:32.033 { 00:09:32.033 "name": "BaseBdev2", 00:09:32.033 "aliases": [ 00:09:32.033 "b70b9f96-f016-4dda-a79a-7a506aeb8c8a" 00:09:32.033 ], 00:09:32.033 "product_name": "Malloc disk", 00:09:32.033 "block_size": 512, 00:09:32.033 "num_blocks": 65536, 00:09:32.033 "uuid": "b70b9f96-f016-4dda-a79a-7a506aeb8c8a", 00:09:32.033 "assigned_rate_limits": { 00:09:32.033 "rw_ios_per_sec": 0, 00:09:32.033 "rw_mbytes_per_sec": 0, 00:09:32.033 "r_mbytes_per_sec": 0, 00:09:32.033 "w_mbytes_per_sec": 0 00:09:32.033 }, 00:09:32.033 "claimed": true, 00:09:32.033 "claim_type": "exclusive_write", 00:09:32.033 "zoned": false, 00:09:32.033 "supported_io_types": { 00:09:32.033 "read": true, 00:09:32.033 "write": true, 00:09:32.033 "unmap": true, 00:09:32.033 "flush": true, 00:09:32.033 "reset": true, 00:09:32.033 "nvme_admin": false, 00:09:32.033 "nvme_io": false, 00:09:32.033 "nvme_io_md": false, 00:09:32.033 "write_zeroes": true, 00:09:32.033 "zcopy": true, 00:09:32.033 "get_zone_info": false, 00:09:32.033 "zone_management": false, 00:09:32.033 "zone_append": false, 00:09:32.033 "compare": false, 00:09:32.033 "compare_and_write": false, 00:09:32.033 "abort": true, 00:09:32.033 "seek_hole": false, 00:09:32.033 "seek_data": false, 00:09:32.033 "copy": true, 00:09:32.033 "nvme_iov_md": false 00:09:32.033 }, 00:09:32.033 "memory_domains": [ 00:09:32.033 { 00:09:32.033 "dma_device_id": "system", 00:09:32.033 "dma_device_type": 1 00:09:32.033 }, 00:09:32.033 { 00:09:32.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.033 "dma_device_type": 2 00:09:32.033 } 00:09:32.033 ], 00:09:32.033 "driver_specific": {} 00:09:32.033 } 00:09:32.033 ] 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.033 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.034 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.034 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.034 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.034 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.034 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.034 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.034 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.034 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.034 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.034 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.034 
03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.034 "name": "Existed_Raid", 00:09:32.034 "uuid": "a17d594f-c743-4b4b-9006-1d1d86993d0b", 00:09:32.034 "strip_size_kb": 0, 00:09:32.034 "state": "configuring", 00:09:32.034 "raid_level": "raid1", 00:09:32.034 "superblock": true, 00:09:32.034 "num_base_bdevs": 3, 00:09:32.034 "num_base_bdevs_discovered": 2, 00:09:32.034 "num_base_bdevs_operational": 3, 00:09:32.034 "base_bdevs_list": [ 00:09:32.034 { 00:09:32.034 "name": "BaseBdev1", 00:09:32.034 "uuid": "ab436a7e-5e6a-4f3e-802a-133984ace95f", 00:09:32.034 "is_configured": true, 00:09:32.034 "data_offset": 2048, 00:09:32.034 "data_size": 63488 00:09:32.034 }, 00:09:32.034 { 00:09:32.034 "name": "BaseBdev2", 00:09:32.034 "uuid": "b70b9f96-f016-4dda-a79a-7a506aeb8c8a", 00:09:32.034 "is_configured": true, 00:09:32.034 "data_offset": 2048, 00:09:32.034 "data_size": 63488 00:09:32.034 }, 00:09:32.034 { 00:09:32.034 "name": "BaseBdev3", 00:09:32.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.034 "is_configured": false, 00:09:32.034 "data_offset": 0, 00:09:32.034 "data_size": 0 00:09:32.034 } 00:09:32.034 ] 00:09:32.034 }' 00:09:32.034 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.034 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.603 [2024-11-18 03:09:35.904030] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:32.603 [2024-11-18 03:09:35.904342] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000006980 00:09:32.603 [2024-11-18 03:09:35.904407] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:32.603 BaseBdev3 00:09:32.603 [2024-11-18 03:09:35.904763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:32.603 [2024-11-18 03:09:35.904907] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:32.603 [2024-11-18 03:09:35.904983] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:32.603 [2024-11-18 03:09:35.905159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.603 03:09:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.603 [ 00:09:32.603 { 00:09:32.603 "name": "BaseBdev3", 00:09:32.603 "aliases": [ 00:09:32.603 "1abeada3-5e46-4ddb-84bf-3e6d823141bd" 00:09:32.603 ], 00:09:32.603 "product_name": "Malloc disk", 00:09:32.603 "block_size": 512, 00:09:32.603 "num_blocks": 65536, 00:09:32.603 "uuid": "1abeada3-5e46-4ddb-84bf-3e6d823141bd", 00:09:32.603 "assigned_rate_limits": { 00:09:32.603 "rw_ios_per_sec": 0, 00:09:32.603 "rw_mbytes_per_sec": 0, 00:09:32.603 "r_mbytes_per_sec": 0, 00:09:32.603 "w_mbytes_per_sec": 0 00:09:32.603 }, 00:09:32.603 "claimed": true, 00:09:32.603 "claim_type": "exclusive_write", 00:09:32.603 "zoned": false, 00:09:32.603 "supported_io_types": { 00:09:32.603 "read": true, 00:09:32.603 "write": true, 00:09:32.603 "unmap": true, 00:09:32.603 "flush": true, 00:09:32.603 "reset": true, 00:09:32.603 "nvme_admin": false, 00:09:32.603 "nvme_io": false, 00:09:32.603 "nvme_io_md": false, 00:09:32.603 "write_zeroes": true, 00:09:32.603 "zcopy": true, 00:09:32.603 "get_zone_info": false, 00:09:32.603 "zone_management": false, 00:09:32.603 "zone_append": false, 00:09:32.603 "compare": false, 00:09:32.603 "compare_and_write": false, 00:09:32.603 "abort": true, 00:09:32.603 "seek_hole": false, 00:09:32.603 "seek_data": false, 00:09:32.603 "copy": true, 00:09:32.603 "nvme_iov_md": false 00:09:32.603 }, 00:09:32.603 "memory_domains": [ 00:09:32.603 { 00:09:32.603 "dma_device_id": "system", 00:09:32.603 "dma_device_type": 1 00:09:32.603 }, 00:09:32.603 { 00:09:32.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.603 "dma_device_type": 2 00:09:32.603 } 00:09:32.603 ], 00:09:32.603 "driver_specific": {} 00:09:32.603 } 00:09:32.603 ] 
00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.603 03:09:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.603 "name": "Existed_Raid", 00:09:32.603 "uuid": "a17d594f-c743-4b4b-9006-1d1d86993d0b", 00:09:32.603 "strip_size_kb": 0, 00:09:32.603 "state": "online", 00:09:32.603 "raid_level": "raid1", 00:09:32.603 "superblock": true, 00:09:32.603 "num_base_bdevs": 3, 00:09:32.603 "num_base_bdevs_discovered": 3, 00:09:32.603 "num_base_bdevs_operational": 3, 00:09:32.603 "base_bdevs_list": [ 00:09:32.603 { 00:09:32.603 "name": "BaseBdev1", 00:09:32.603 "uuid": "ab436a7e-5e6a-4f3e-802a-133984ace95f", 00:09:32.603 "is_configured": true, 00:09:32.603 "data_offset": 2048, 00:09:32.603 "data_size": 63488 00:09:32.603 }, 00:09:32.603 { 00:09:32.603 "name": "BaseBdev2", 00:09:32.603 "uuid": "b70b9f96-f016-4dda-a79a-7a506aeb8c8a", 00:09:32.603 "is_configured": true, 00:09:32.603 "data_offset": 2048, 00:09:32.603 "data_size": 63488 00:09:32.603 }, 00:09:32.603 { 00:09:32.603 "name": "BaseBdev3", 00:09:32.603 "uuid": "1abeada3-5e46-4ddb-84bf-3e6d823141bd", 00:09:32.603 "is_configured": true, 00:09:32.603 "data_offset": 2048, 00:09:32.603 "data_size": 63488 00:09:32.603 } 00:09:32.603 ] 00:09:32.603 }' 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.603 03:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.863 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:32.863 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:32.863 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:32.863 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:32.863 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:32.863 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:32.863 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:32.863 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:32.863 03:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.863 03:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.863 [2024-11-18 03:09:36.427565] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.123 03:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.123 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.123 "name": "Existed_Raid", 00:09:33.123 "aliases": [ 00:09:33.123 "a17d594f-c743-4b4b-9006-1d1d86993d0b" 00:09:33.123 ], 00:09:33.123 "product_name": "Raid Volume", 00:09:33.123 "block_size": 512, 00:09:33.123 "num_blocks": 63488, 00:09:33.123 "uuid": "a17d594f-c743-4b4b-9006-1d1d86993d0b", 00:09:33.123 "assigned_rate_limits": { 00:09:33.123 "rw_ios_per_sec": 0, 00:09:33.123 "rw_mbytes_per_sec": 0, 00:09:33.123 "r_mbytes_per_sec": 0, 00:09:33.123 "w_mbytes_per_sec": 0 00:09:33.123 }, 00:09:33.123 "claimed": false, 00:09:33.123 "zoned": false, 00:09:33.123 "supported_io_types": { 00:09:33.123 "read": true, 00:09:33.123 "write": true, 00:09:33.123 "unmap": false, 00:09:33.123 "flush": false, 00:09:33.123 "reset": true, 00:09:33.123 "nvme_admin": false, 00:09:33.123 "nvme_io": false, 00:09:33.123 "nvme_io_md": false, 00:09:33.123 
"write_zeroes": true, 00:09:33.123 "zcopy": false, 00:09:33.123 "get_zone_info": false, 00:09:33.123 "zone_management": false, 00:09:33.123 "zone_append": false, 00:09:33.123 "compare": false, 00:09:33.123 "compare_and_write": false, 00:09:33.123 "abort": false, 00:09:33.123 "seek_hole": false, 00:09:33.123 "seek_data": false, 00:09:33.123 "copy": false, 00:09:33.123 "nvme_iov_md": false 00:09:33.123 }, 00:09:33.123 "memory_domains": [ 00:09:33.123 { 00:09:33.123 "dma_device_id": "system", 00:09:33.123 "dma_device_type": 1 00:09:33.123 }, 00:09:33.123 { 00:09:33.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.123 "dma_device_type": 2 00:09:33.123 }, 00:09:33.123 { 00:09:33.123 "dma_device_id": "system", 00:09:33.123 "dma_device_type": 1 00:09:33.123 }, 00:09:33.123 { 00:09:33.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.123 "dma_device_type": 2 00:09:33.123 }, 00:09:33.123 { 00:09:33.123 "dma_device_id": "system", 00:09:33.123 "dma_device_type": 1 00:09:33.123 }, 00:09:33.123 { 00:09:33.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.123 "dma_device_type": 2 00:09:33.123 } 00:09:33.123 ], 00:09:33.123 "driver_specific": { 00:09:33.123 "raid": { 00:09:33.123 "uuid": "a17d594f-c743-4b4b-9006-1d1d86993d0b", 00:09:33.123 "strip_size_kb": 0, 00:09:33.123 "state": "online", 00:09:33.123 "raid_level": "raid1", 00:09:33.123 "superblock": true, 00:09:33.123 "num_base_bdevs": 3, 00:09:33.123 "num_base_bdevs_discovered": 3, 00:09:33.123 "num_base_bdevs_operational": 3, 00:09:33.123 "base_bdevs_list": [ 00:09:33.123 { 00:09:33.123 "name": "BaseBdev1", 00:09:33.123 "uuid": "ab436a7e-5e6a-4f3e-802a-133984ace95f", 00:09:33.123 "is_configured": true, 00:09:33.123 "data_offset": 2048, 00:09:33.123 "data_size": 63488 00:09:33.123 }, 00:09:33.123 { 00:09:33.123 "name": "BaseBdev2", 00:09:33.123 "uuid": "b70b9f96-f016-4dda-a79a-7a506aeb8c8a", 00:09:33.123 "is_configured": true, 00:09:33.123 "data_offset": 2048, 00:09:33.123 "data_size": 63488 00:09:33.123 }, 
00:09:33.123 { 00:09:33.123 "name": "BaseBdev3", 00:09:33.124 "uuid": "1abeada3-5e46-4ddb-84bf-3e6d823141bd", 00:09:33.124 "is_configured": true, 00:09:33.124 "data_offset": 2048, 00:09:33.124 "data_size": 63488 00:09:33.124 } 00:09:33.124 ] 00:09:33.124 } 00:09:33.124 } 00:09:33.124 }' 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:33.124 BaseBdev2 00:09:33.124 BaseBdev3' 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.124 
03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.124 03:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.385 [2024-11-18 03:09:36.698818] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.385 
03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.385 "name": "Existed_Raid", 00:09:33.385 "uuid": "a17d594f-c743-4b4b-9006-1d1d86993d0b", 00:09:33.385 "strip_size_kb": 0, 00:09:33.385 "state": "online", 00:09:33.385 "raid_level": "raid1", 00:09:33.385 "superblock": true, 00:09:33.385 "num_base_bdevs": 3, 00:09:33.385 "num_base_bdevs_discovered": 2, 00:09:33.385 "num_base_bdevs_operational": 2, 00:09:33.385 "base_bdevs_list": [ 00:09:33.385 { 00:09:33.385 "name": null, 00:09:33.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.385 "is_configured": false, 00:09:33.385 "data_offset": 0, 00:09:33.385 "data_size": 63488 00:09:33.385 }, 00:09:33.385 { 00:09:33.385 "name": "BaseBdev2", 00:09:33.385 "uuid": "b70b9f96-f016-4dda-a79a-7a506aeb8c8a", 00:09:33.385 "is_configured": true, 00:09:33.385 "data_offset": 2048, 00:09:33.385 "data_size": 63488 00:09:33.385 }, 00:09:33.385 { 00:09:33.385 "name": "BaseBdev3", 00:09:33.385 "uuid": "1abeada3-5e46-4ddb-84bf-3e6d823141bd", 00:09:33.385 "is_configured": true, 00:09:33.385 "data_offset": 2048, 00:09:33.385 "data_size": 63488 00:09:33.385 } 00:09:33.385 ] 00:09:33.385 }' 00:09:33.385 03:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.385 
03:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.690 [2024-11-18 03:09:37.202213] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.690 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.950 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:33.950 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:33.950 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:33.950 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.950 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.950 [2024-11-18 03:09:37.274034] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:33.950 [2024-11-18 03:09:37.274205] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.950 [2024-11-18 03:09:37.286198] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.950 [2024-11-18 03:09:37.286314] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.950 [2024-11-18 03:09:37.286364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:33.950 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.950 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:33.950 03:09:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:33.950 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.950 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:33.950 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.950 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.950 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.950 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:33.950 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:33.950 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:33.950 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.951 BaseBdev2 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.951 [ 00:09:33.951 { 00:09:33.951 "name": "BaseBdev2", 00:09:33.951 "aliases": [ 00:09:33.951 "e0c5414a-4195-40fb-8477-7ce142c0b9e7" 00:09:33.951 ], 00:09:33.951 "product_name": "Malloc disk", 00:09:33.951 "block_size": 512, 00:09:33.951 "num_blocks": 65536, 00:09:33.951 "uuid": "e0c5414a-4195-40fb-8477-7ce142c0b9e7", 00:09:33.951 "assigned_rate_limits": { 00:09:33.951 "rw_ios_per_sec": 0, 00:09:33.951 "rw_mbytes_per_sec": 0, 00:09:33.951 "r_mbytes_per_sec": 0, 00:09:33.951 "w_mbytes_per_sec": 0 00:09:33.951 }, 00:09:33.951 "claimed": false, 00:09:33.951 "zoned": false, 00:09:33.951 "supported_io_types": { 00:09:33.951 "read": true, 00:09:33.951 "write": true, 00:09:33.951 "unmap": true, 00:09:33.951 "flush": true, 00:09:33.951 "reset": true, 00:09:33.951 "nvme_admin": false, 00:09:33.951 "nvme_io": false, 00:09:33.951 
"nvme_io_md": false, 00:09:33.951 "write_zeroes": true, 00:09:33.951 "zcopy": true, 00:09:33.951 "get_zone_info": false, 00:09:33.951 "zone_management": false, 00:09:33.951 "zone_append": false, 00:09:33.951 "compare": false, 00:09:33.951 "compare_and_write": false, 00:09:33.951 "abort": true, 00:09:33.951 "seek_hole": false, 00:09:33.951 "seek_data": false, 00:09:33.951 "copy": true, 00:09:33.951 "nvme_iov_md": false 00:09:33.951 }, 00:09:33.951 "memory_domains": [ 00:09:33.951 { 00:09:33.951 "dma_device_id": "system", 00:09:33.951 "dma_device_type": 1 00:09:33.951 }, 00:09:33.951 { 00:09:33.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.951 "dma_device_type": 2 00:09:33.951 } 00:09:33.951 ], 00:09:33.951 "driver_specific": {} 00:09:33.951 } 00:09:33.951 ] 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.951 BaseBdev3 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.951 [ 00:09:33.951 { 00:09:33.951 "name": "BaseBdev3", 00:09:33.951 "aliases": [ 00:09:33.951 "b226b60f-0729-441d-8e43-f498123a2dd1" 00:09:33.951 ], 00:09:33.951 "product_name": "Malloc disk", 00:09:33.951 "block_size": 512, 00:09:33.951 "num_blocks": 65536, 00:09:33.951 "uuid": "b226b60f-0729-441d-8e43-f498123a2dd1", 00:09:33.951 "assigned_rate_limits": { 00:09:33.951 "rw_ios_per_sec": 0, 00:09:33.951 "rw_mbytes_per_sec": 0, 00:09:33.951 "r_mbytes_per_sec": 0, 00:09:33.951 "w_mbytes_per_sec": 0 00:09:33.951 }, 00:09:33.951 "claimed": false, 00:09:33.951 "zoned": false, 00:09:33.951 "supported_io_types": { 00:09:33.951 "read": true, 00:09:33.951 "write": true, 00:09:33.951 "unmap": true, 00:09:33.951 "flush": true, 00:09:33.951 "reset": true, 00:09:33.951 "nvme_admin": false, 
00:09:33.951 "nvme_io": false, 00:09:33.951 "nvme_io_md": false, 00:09:33.951 "write_zeroes": true, 00:09:33.951 "zcopy": true, 00:09:33.951 "get_zone_info": false, 00:09:33.951 "zone_management": false, 00:09:33.951 "zone_append": false, 00:09:33.951 "compare": false, 00:09:33.951 "compare_and_write": false, 00:09:33.951 "abort": true, 00:09:33.951 "seek_hole": false, 00:09:33.951 "seek_data": false, 00:09:33.951 "copy": true, 00:09:33.951 "nvme_iov_md": false 00:09:33.951 }, 00:09:33.951 "memory_domains": [ 00:09:33.951 { 00:09:33.951 "dma_device_id": "system", 00:09:33.951 "dma_device_type": 1 00:09:33.951 }, 00:09:33.951 { 00:09:33.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.951 "dma_device_type": 2 00:09:33.951 } 00:09:33.951 ], 00:09:33.951 "driver_specific": {} 00:09:33.951 } 00:09:33.951 ] 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.951 [2024-11-18 03:09:37.452283] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.951 [2024-11-18 03:09:37.452383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.951 [2024-11-18 03:09:37.452430] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.951 [2024-11-18 03:09:37.454319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.951 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.952 
03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.952 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.952 "name": "Existed_Raid", 00:09:33.952 "uuid": "ba746a50-8516-421b-a340-fe45898ece74", 00:09:33.952 "strip_size_kb": 0, 00:09:33.952 "state": "configuring", 00:09:33.952 "raid_level": "raid1", 00:09:33.952 "superblock": true, 00:09:33.952 "num_base_bdevs": 3, 00:09:33.952 "num_base_bdevs_discovered": 2, 00:09:33.952 "num_base_bdevs_operational": 3, 00:09:33.952 "base_bdevs_list": [ 00:09:33.952 { 00:09:33.952 "name": "BaseBdev1", 00:09:33.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.952 "is_configured": false, 00:09:33.952 "data_offset": 0, 00:09:33.952 "data_size": 0 00:09:33.952 }, 00:09:33.952 { 00:09:33.952 "name": "BaseBdev2", 00:09:33.952 "uuid": "e0c5414a-4195-40fb-8477-7ce142c0b9e7", 00:09:33.952 "is_configured": true, 00:09:33.952 "data_offset": 2048, 00:09:33.952 "data_size": 63488 00:09:33.952 }, 00:09:33.952 { 00:09:33.952 "name": "BaseBdev3", 00:09:33.952 "uuid": "b226b60f-0729-441d-8e43-f498123a2dd1", 00:09:33.952 "is_configured": true, 00:09:33.952 "data_offset": 2048, 00:09:33.952 "data_size": 63488 00:09:33.952 } 00:09:33.952 ] 00:09:33.952 }' 00:09:33.952 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.952 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.520 [2024-11-18 03:09:37.867600] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:34.520 03:09:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.520 "name": 
"Existed_Raid", 00:09:34.520 "uuid": "ba746a50-8516-421b-a340-fe45898ece74", 00:09:34.520 "strip_size_kb": 0, 00:09:34.520 "state": "configuring", 00:09:34.520 "raid_level": "raid1", 00:09:34.520 "superblock": true, 00:09:34.520 "num_base_bdevs": 3, 00:09:34.520 "num_base_bdevs_discovered": 1, 00:09:34.520 "num_base_bdevs_operational": 3, 00:09:34.520 "base_bdevs_list": [ 00:09:34.520 { 00:09:34.520 "name": "BaseBdev1", 00:09:34.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.520 "is_configured": false, 00:09:34.520 "data_offset": 0, 00:09:34.520 "data_size": 0 00:09:34.520 }, 00:09:34.520 { 00:09:34.520 "name": null, 00:09:34.520 "uuid": "e0c5414a-4195-40fb-8477-7ce142c0b9e7", 00:09:34.520 "is_configured": false, 00:09:34.520 "data_offset": 0, 00:09:34.520 "data_size": 63488 00:09:34.520 }, 00:09:34.520 { 00:09:34.520 "name": "BaseBdev3", 00:09:34.520 "uuid": "b226b60f-0729-441d-8e43-f498123a2dd1", 00:09:34.520 "is_configured": true, 00:09:34.520 "data_offset": 2048, 00:09:34.520 "data_size": 63488 00:09:34.520 } 00:09:34.520 ] 00:09:34.520 }' 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.520 03:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.780 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.780 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:34.780 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.780 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.780 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:35.040 
03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.040 [2024-11-18 03:09:38.370005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.040 BaseBdev1 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.040 [ 00:09:35.040 { 00:09:35.040 "name": "BaseBdev1", 00:09:35.040 "aliases": [ 00:09:35.040 "05e9b204-4938-4ab2-acb4-0345c2dbe6c9" 00:09:35.040 ], 00:09:35.040 "product_name": "Malloc disk", 00:09:35.040 "block_size": 512, 00:09:35.040 "num_blocks": 65536, 00:09:35.040 "uuid": "05e9b204-4938-4ab2-acb4-0345c2dbe6c9", 00:09:35.040 "assigned_rate_limits": { 00:09:35.040 "rw_ios_per_sec": 0, 00:09:35.040 "rw_mbytes_per_sec": 0, 00:09:35.040 "r_mbytes_per_sec": 0, 00:09:35.040 "w_mbytes_per_sec": 0 00:09:35.040 }, 00:09:35.040 "claimed": true, 00:09:35.040 "claim_type": "exclusive_write", 00:09:35.040 "zoned": false, 00:09:35.040 "supported_io_types": { 00:09:35.040 "read": true, 00:09:35.040 "write": true, 00:09:35.040 "unmap": true, 00:09:35.040 "flush": true, 00:09:35.040 "reset": true, 00:09:35.040 "nvme_admin": false, 00:09:35.040 "nvme_io": false, 00:09:35.040 "nvme_io_md": false, 00:09:35.040 "write_zeroes": true, 00:09:35.040 "zcopy": true, 00:09:35.040 "get_zone_info": false, 00:09:35.040 "zone_management": false, 00:09:35.040 "zone_append": false, 00:09:35.040 "compare": false, 00:09:35.040 "compare_and_write": false, 00:09:35.040 "abort": true, 00:09:35.040 "seek_hole": false, 00:09:35.040 "seek_data": false, 00:09:35.040 "copy": true, 00:09:35.040 "nvme_iov_md": false 00:09:35.040 }, 00:09:35.040 "memory_domains": [ 00:09:35.040 { 00:09:35.040 "dma_device_id": "system", 00:09:35.040 "dma_device_type": 1 00:09:35.040 }, 00:09:35.040 { 00:09:35.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.040 "dma_device_type": 2 00:09:35.040 } 00:09:35.040 ], 00:09:35.040 "driver_specific": {} 00:09:35.040 } 00:09:35.040 ] 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:35.040 
03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.040 "name": "Existed_Raid", 00:09:35.040 "uuid": "ba746a50-8516-421b-a340-fe45898ece74", 00:09:35.040 "strip_size_kb": 0, 
00:09:35.040 "state": "configuring", 00:09:35.040 "raid_level": "raid1", 00:09:35.040 "superblock": true, 00:09:35.040 "num_base_bdevs": 3, 00:09:35.040 "num_base_bdevs_discovered": 2, 00:09:35.040 "num_base_bdevs_operational": 3, 00:09:35.040 "base_bdevs_list": [ 00:09:35.040 { 00:09:35.040 "name": "BaseBdev1", 00:09:35.040 "uuid": "05e9b204-4938-4ab2-acb4-0345c2dbe6c9", 00:09:35.040 "is_configured": true, 00:09:35.040 "data_offset": 2048, 00:09:35.040 "data_size": 63488 00:09:35.040 }, 00:09:35.040 { 00:09:35.040 "name": null, 00:09:35.040 "uuid": "e0c5414a-4195-40fb-8477-7ce142c0b9e7", 00:09:35.040 "is_configured": false, 00:09:35.040 "data_offset": 0, 00:09:35.040 "data_size": 63488 00:09:35.040 }, 00:09:35.040 { 00:09:35.040 "name": "BaseBdev3", 00:09:35.040 "uuid": "b226b60f-0729-441d-8e43-f498123a2dd1", 00:09:35.040 "is_configured": true, 00:09:35.040 "data_offset": 2048, 00:09:35.040 "data_size": 63488 00:09:35.040 } 00:09:35.040 ] 00:09:35.040 }' 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.040 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.299 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.299 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:35.299 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.299 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.559 [2024-11-18 03:09:38.909122] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.559 "name": "Existed_Raid", 00:09:35.559 "uuid": "ba746a50-8516-421b-a340-fe45898ece74", 00:09:35.559 "strip_size_kb": 0, 00:09:35.559 "state": "configuring", 00:09:35.559 "raid_level": "raid1", 00:09:35.559 "superblock": true, 00:09:35.559 "num_base_bdevs": 3, 00:09:35.559 "num_base_bdevs_discovered": 1, 00:09:35.559 "num_base_bdevs_operational": 3, 00:09:35.559 "base_bdevs_list": [ 00:09:35.559 { 00:09:35.559 "name": "BaseBdev1", 00:09:35.559 "uuid": "05e9b204-4938-4ab2-acb4-0345c2dbe6c9", 00:09:35.559 "is_configured": true, 00:09:35.559 "data_offset": 2048, 00:09:35.559 "data_size": 63488 00:09:35.559 }, 00:09:35.559 { 00:09:35.559 "name": null, 00:09:35.559 "uuid": "e0c5414a-4195-40fb-8477-7ce142c0b9e7", 00:09:35.559 "is_configured": false, 00:09:35.559 "data_offset": 0, 00:09:35.559 "data_size": 63488 00:09:35.559 }, 00:09:35.559 { 00:09:35.559 "name": null, 00:09:35.559 "uuid": "b226b60f-0729-441d-8e43-f498123a2dd1", 00:09:35.559 "is_configured": false, 00:09:35.559 "data_offset": 0, 00:09:35.559 "data_size": 63488 00:09:35.559 } 00:09:35.559 ] 00:09:35.559 }' 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.559 03:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.816 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.816 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:35.816 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:35.816 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.816 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.073 [2024-11-18 03:09:39.412301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.073 "name": "Existed_Raid", 00:09:36.073 "uuid": "ba746a50-8516-421b-a340-fe45898ece74", 00:09:36.073 "strip_size_kb": 0, 00:09:36.073 "state": "configuring", 00:09:36.073 "raid_level": "raid1", 00:09:36.073 "superblock": true, 00:09:36.073 "num_base_bdevs": 3, 00:09:36.073 "num_base_bdevs_discovered": 2, 00:09:36.073 "num_base_bdevs_operational": 3, 00:09:36.073 "base_bdevs_list": [ 00:09:36.073 { 00:09:36.073 "name": "BaseBdev1", 00:09:36.073 "uuid": "05e9b204-4938-4ab2-acb4-0345c2dbe6c9", 00:09:36.073 "is_configured": true, 00:09:36.073 "data_offset": 2048, 00:09:36.073 "data_size": 63488 00:09:36.073 }, 00:09:36.073 { 00:09:36.073 "name": null, 00:09:36.073 "uuid": "e0c5414a-4195-40fb-8477-7ce142c0b9e7", 00:09:36.073 "is_configured": false, 00:09:36.073 "data_offset": 0, 00:09:36.073 "data_size": 63488 00:09:36.073 }, 00:09:36.073 { 00:09:36.073 "name": "BaseBdev3", 00:09:36.073 "uuid": "b226b60f-0729-441d-8e43-f498123a2dd1", 00:09:36.073 "is_configured": true, 00:09:36.073 "data_offset": 2048, 00:09:36.073 "data_size": 63488 00:09:36.073 } 00:09:36.073 ] 00:09:36.073 }' 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.073 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.333 [2024-11-18 03:09:39.831578] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.333 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.333 "name": "Existed_Raid", 00:09:36.333 "uuid": "ba746a50-8516-421b-a340-fe45898ece74", 00:09:36.333 "strip_size_kb": 0, 00:09:36.333 "state": "configuring", 00:09:36.333 "raid_level": "raid1", 00:09:36.333 "superblock": true, 00:09:36.333 "num_base_bdevs": 3, 00:09:36.333 "num_base_bdevs_discovered": 1, 00:09:36.333 "num_base_bdevs_operational": 3, 00:09:36.334 "base_bdevs_list": [ 00:09:36.334 { 00:09:36.334 "name": null, 00:09:36.334 "uuid": "05e9b204-4938-4ab2-acb4-0345c2dbe6c9", 00:09:36.334 "is_configured": false, 00:09:36.334 "data_offset": 0, 00:09:36.334 "data_size": 63488 00:09:36.334 }, 00:09:36.334 { 00:09:36.334 "name": null, 00:09:36.334 "uuid": 
"e0c5414a-4195-40fb-8477-7ce142c0b9e7", 00:09:36.334 "is_configured": false, 00:09:36.334 "data_offset": 0, 00:09:36.334 "data_size": 63488 00:09:36.334 }, 00:09:36.334 { 00:09:36.334 "name": "BaseBdev3", 00:09:36.334 "uuid": "b226b60f-0729-441d-8e43-f498123a2dd1", 00:09:36.334 "is_configured": true, 00:09:36.334 "data_offset": 2048, 00:09:36.334 "data_size": 63488 00:09:36.334 } 00:09:36.334 ] 00:09:36.334 }' 00:09:36.334 03:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.334 03:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.902 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.902 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.902 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:36.902 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.902 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.902 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.903 [2024-11-18 03:09:40.297545] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.903 "name": "Existed_Raid", 00:09:36.903 "uuid": "ba746a50-8516-421b-a340-fe45898ece74", 00:09:36.903 "strip_size_kb": 0, 00:09:36.903 "state": "configuring", 00:09:36.903 
"raid_level": "raid1", 00:09:36.903 "superblock": true, 00:09:36.903 "num_base_bdevs": 3, 00:09:36.903 "num_base_bdevs_discovered": 2, 00:09:36.903 "num_base_bdevs_operational": 3, 00:09:36.903 "base_bdevs_list": [ 00:09:36.903 { 00:09:36.903 "name": null, 00:09:36.903 "uuid": "05e9b204-4938-4ab2-acb4-0345c2dbe6c9", 00:09:36.903 "is_configured": false, 00:09:36.903 "data_offset": 0, 00:09:36.903 "data_size": 63488 00:09:36.903 }, 00:09:36.903 { 00:09:36.903 "name": "BaseBdev2", 00:09:36.903 "uuid": "e0c5414a-4195-40fb-8477-7ce142c0b9e7", 00:09:36.903 "is_configured": true, 00:09:36.903 "data_offset": 2048, 00:09:36.903 "data_size": 63488 00:09:36.903 }, 00:09:36.903 { 00:09:36.903 "name": "BaseBdev3", 00:09:36.903 "uuid": "b226b60f-0729-441d-8e43-f498123a2dd1", 00:09:36.903 "is_configured": true, 00:09:36.903 "data_offset": 2048, 00:09:36.903 "data_size": 63488 00:09:36.903 } 00:09:36.903 ] 00:09:36.903 }' 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.903 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.162 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:37.162 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.162 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.162 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.163 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.163 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:37.163 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.163 03:09:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.163 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.163 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 05e9b204-4938-4ab2-acb4-0345c2dbe6c9 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.423 [2024-11-18 03:09:40.791761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:37.423 [2024-11-18 03:09:40.792042] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:37.423 NewBaseBdev 00:09:37.423 [2024-11-18 03:09:40.792078] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:37.423 [2024-11-18 03:09:40.792338] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:37.423 [2024-11-18 03:09:40.792467] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:37.423 [2024-11-18 03:09:40.792482] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:37.423 [2024-11-18 03:09:40.792580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:37.423 
03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.423 [ 00:09:37.423 { 00:09:37.423 "name": "NewBaseBdev", 00:09:37.423 "aliases": [ 00:09:37.423 "05e9b204-4938-4ab2-acb4-0345c2dbe6c9" 00:09:37.423 ], 00:09:37.423 "product_name": "Malloc disk", 00:09:37.423 "block_size": 512, 00:09:37.423 "num_blocks": 65536, 00:09:37.423 "uuid": "05e9b204-4938-4ab2-acb4-0345c2dbe6c9", 00:09:37.423 "assigned_rate_limits": { 00:09:37.423 "rw_ios_per_sec": 0, 00:09:37.423 "rw_mbytes_per_sec": 0, 00:09:37.423 "r_mbytes_per_sec": 0, 00:09:37.423 "w_mbytes_per_sec": 0 00:09:37.423 }, 00:09:37.423 "claimed": true, 00:09:37.423 "claim_type": "exclusive_write", 00:09:37.423 
"zoned": false, 00:09:37.423 "supported_io_types": { 00:09:37.423 "read": true, 00:09:37.423 "write": true, 00:09:37.423 "unmap": true, 00:09:37.423 "flush": true, 00:09:37.423 "reset": true, 00:09:37.423 "nvme_admin": false, 00:09:37.423 "nvme_io": false, 00:09:37.423 "nvme_io_md": false, 00:09:37.423 "write_zeroes": true, 00:09:37.423 "zcopy": true, 00:09:37.423 "get_zone_info": false, 00:09:37.423 "zone_management": false, 00:09:37.423 "zone_append": false, 00:09:37.423 "compare": false, 00:09:37.423 "compare_and_write": false, 00:09:37.423 "abort": true, 00:09:37.423 "seek_hole": false, 00:09:37.423 "seek_data": false, 00:09:37.423 "copy": true, 00:09:37.423 "nvme_iov_md": false 00:09:37.423 }, 00:09:37.423 "memory_domains": [ 00:09:37.423 { 00:09:37.423 "dma_device_id": "system", 00:09:37.423 "dma_device_type": 1 00:09:37.423 }, 00:09:37.423 { 00:09:37.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.423 "dma_device_type": 2 00:09:37.423 } 00:09:37.423 ], 00:09:37.423 "driver_specific": {} 00:09:37.423 } 00:09:37.423 ] 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.423 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.424 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:37.424 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.424 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.424 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.424 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.424 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.424 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.424 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.424 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.424 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.424 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.424 "name": "Existed_Raid", 00:09:37.424 "uuid": "ba746a50-8516-421b-a340-fe45898ece74", 00:09:37.424 "strip_size_kb": 0, 00:09:37.424 "state": "online", 00:09:37.424 "raid_level": "raid1", 00:09:37.424 "superblock": true, 00:09:37.424 "num_base_bdevs": 3, 00:09:37.424 "num_base_bdevs_discovered": 3, 00:09:37.424 "num_base_bdevs_operational": 3, 00:09:37.424 "base_bdevs_list": [ 00:09:37.424 { 00:09:37.424 "name": "NewBaseBdev", 00:09:37.424 "uuid": "05e9b204-4938-4ab2-acb4-0345c2dbe6c9", 00:09:37.424 "is_configured": true, 00:09:37.424 "data_offset": 2048, 00:09:37.424 "data_size": 63488 00:09:37.424 }, 00:09:37.424 { 00:09:37.424 "name": "BaseBdev2", 00:09:37.424 "uuid": "e0c5414a-4195-40fb-8477-7ce142c0b9e7", 00:09:37.424 "is_configured": true, 00:09:37.424 "data_offset": 2048, 00:09:37.424 "data_size": 63488 00:09:37.424 }, 00:09:37.424 
{ 00:09:37.424 "name": "BaseBdev3", 00:09:37.424 "uuid": "b226b60f-0729-441d-8e43-f498123a2dd1", 00:09:37.424 "is_configured": true, 00:09:37.424 "data_offset": 2048, 00:09:37.424 "data_size": 63488 00:09:37.424 } 00:09:37.424 ] 00:09:37.424 }' 00:09:37.424 03:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.424 03:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.684 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:37.684 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:37.684 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:37.684 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:37.684 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:37.684 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:37.684 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:37.684 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.684 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.684 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:37.684 [2024-11-18 03:09:41.223381] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.684 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.944 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:37.944 "name": "Existed_Raid", 00:09:37.944 
"aliases": [ 00:09:37.944 "ba746a50-8516-421b-a340-fe45898ece74" 00:09:37.944 ], 00:09:37.944 "product_name": "Raid Volume", 00:09:37.944 "block_size": 512, 00:09:37.944 "num_blocks": 63488, 00:09:37.944 "uuid": "ba746a50-8516-421b-a340-fe45898ece74", 00:09:37.944 "assigned_rate_limits": { 00:09:37.944 "rw_ios_per_sec": 0, 00:09:37.944 "rw_mbytes_per_sec": 0, 00:09:37.944 "r_mbytes_per_sec": 0, 00:09:37.944 "w_mbytes_per_sec": 0 00:09:37.944 }, 00:09:37.944 "claimed": false, 00:09:37.944 "zoned": false, 00:09:37.944 "supported_io_types": { 00:09:37.944 "read": true, 00:09:37.944 "write": true, 00:09:37.944 "unmap": false, 00:09:37.944 "flush": false, 00:09:37.944 "reset": true, 00:09:37.944 "nvme_admin": false, 00:09:37.944 "nvme_io": false, 00:09:37.944 "nvme_io_md": false, 00:09:37.944 "write_zeroes": true, 00:09:37.944 "zcopy": false, 00:09:37.944 "get_zone_info": false, 00:09:37.944 "zone_management": false, 00:09:37.944 "zone_append": false, 00:09:37.944 "compare": false, 00:09:37.944 "compare_and_write": false, 00:09:37.944 "abort": false, 00:09:37.944 "seek_hole": false, 00:09:37.944 "seek_data": false, 00:09:37.944 "copy": false, 00:09:37.944 "nvme_iov_md": false 00:09:37.944 }, 00:09:37.944 "memory_domains": [ 00:09:37.944 { 00:09:37.944 "dma_device_id": "system", 00:09:37.944 "dma_device_type": 1 00:09:37.944 }, 00:09:37.944 { 00:09:37.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.944 "dma_device_type": 2 00:09:37.944 }, 00:09:37.944 { 00:09:37.944 "dma_device_id": "system", 00:09:37.944 "dma_device_type": 1 00:09:37.944 }, 00:09:37.944 { 00:09:37.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.944 "dma_device_type": 2 00:09:37.944 }, 00:09:37.944 { 00:09:37.944 "dma_device_id": "system", 00:09:37.944 "dma_device_type": 1 00:09:37.944 }, 00:09:37.944 { 00:09:37.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.944 "dma_device_type": 2 00:09:37.944 } 00:09:37.944 ], 00:09:37.944 "driver_specific": { 00:09:37.944 "raid": { 00:09:37.944 
"uuid": "ba746a50-8516-421b-a340-fe45898ece74", 00:09:37.944 "strip_size_kb": 0, 00:09:37.944 "state": "online", 00:09:37.944 "raid_level": "raid1", 00:09:37.944 "superblock": true, 00:09:37.944 "num_base_bdevs": 3, 00:09:37.944 "num_base_bdevs_discovered": 3, 00:09:37.944 "num_base_bdevs_operational": 3, 00:09:37.944 "base_bdevs_list": [ 00:09:37.944 { 00:09:37.944 "name": "NewBaseBdev", 00:09:37.944 "uuid": "05e9b204-4938-4ab2-acb4-0345c2dbe6c9", 00:09:37.944 "is_configured": true, 00:09:37.944 "data_offset": 2048, 00:09:37.944 "data_size": 63488 00:09:37.944 }, 00:09:37.944 { 00:09:37.944 "name": "BaseBdev2", 00:09:37.944 "uuid": "e0c5414a-4195-40fb-8477-7ce142c0b9e7", 00:09:37.944 "is_configured": true, 00:09:37.944 "data_offset": 2048, 00:09:37.945 "data_size": 63488 00:09:37.945 }, 00:09:37.945 { 00:09:37.945 "name": "BaseBdev3", 00:09:37.945 "uuid": "b226b60f-0729-441d-8e43-f498123a2dd1", 00:09:37.945 "is_configured": true, 00:09:37.945 "data_offset": 2048, 00:09:37.945 "data_size": 63488 00:09:37.945 } 00:09:37.945 ] 00:09:37.945 } 00:09:37.945 } 00:09:37.945 }' 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:37.945 BaseBdev2 00:09:37.945 BaseBdev3' 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:37.945 03:09:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.945 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.945 [2024-11-18 03:09:41.518595] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.945 [2024-11-18 03:09:41.518625] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.945 [2024-11-18 03:09:41.518720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.945 [2024-11-18 03:09:41.518959] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.945 [2024-11-18 03:09:41.518987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:38.204 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.205 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79210 00:09:38.205 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 79210 ']' 
00:09:38.205 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 79210 00:09:38.205 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:38.205 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.205 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79210 00:09:38.205 killing process with pid 79210 00:09:38.205 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.205 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.205 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79210' 00:09:38.205 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 79210 00:09:38.205 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 79210 00:09:38.205 [2024-11-18 03:09:41.567279] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:38.205 [2024-11-18 03:09:41.598867] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.465 03:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:38.465 00:09:38.465 real 0m8.896s 00:09:38.465 user 0m15.180s 00:09:38.465 sys 0m1.834s 00:09:38.465 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.465 ************************************ 00:09:38.465 END TEST raid_state_function_test_sb 00:09:38.465 ************************************ 00:09:38.465 03:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.465 03:09:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
00:09:38.465 03:09:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:38.465 03:09:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.465 03:09:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.465 ************************************ 00:09:38.465 START TEST raid_superblock_test 00:09:38.465 ************************************ 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79814 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79814 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79814 ']' 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.465 03:09:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.465 [2024-11-18 03:09:41.989331] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:38.465 [2024-11-18 03:09:41.989552] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79814 ] 00:09:38.726 [2024-11-18 03:09:42.147517] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.726 [2024-11-18 03:09:42.197552] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.726 [2024-11-18 03:09:42.240223] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.726 [2024-11-18 03:09:42.240344] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.665 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.665 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:39.665 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:39.665 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.665 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:39.665 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:39.665 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:39.665 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:39.665 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:39.665 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:39.665 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:39.665 
03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.665 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.665 malloc1 00:09:39.665 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.665 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:39.665 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.665 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.666 [2024-11-18 03:09:42.894812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:39.666 [2024-11-18 03:09:42.894949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.666 [2024-11-18 03:09:42.895024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:39.666 [2024-11-18 03:09:42.895126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.666 [2024-11-18 03:09:42.897408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.666 [2024-11-18 03:09:42.897496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:39.666 pt1 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.666 malloc2 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.666 [2024-11-18 03:09:42.938203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:39.666 [2024-11-18 03:09:42.938344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.666 [2024-11-18 03:09:42.938389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:39.666 [2024-11-18 03:09:42.938433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.666 [2024-11-18 03:09:42.940994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.666 [2024-11-18 03:09:42.941075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:39.666 
pt2 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.666 malloc3 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.666 [2024-11-18 03:09:42.967161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:39.666 [2024-11-18 03:09:42.967288] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.666 [2024-11-18 03:09:42.967326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:39.666 [2024-11-18 03:09:42.967366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.666 [2024-11-18 03:09:42.969682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.666 [2024-11-18 03:09:42.969783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:39.666 pt3 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.666 [2024-11-18 03:09:42.979209] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:39.666 [2024-11-18 03:09:42.981293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:39.666 [2024-11-18 03:09:42.981433] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:39.666 [2024-11-18 03:09:42.981630] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:39.666 [2024-11-18 03:09:42.981686] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:39.666 [2024-11-18 03:09:42.982048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:39.666 
[2024-11-18 03:09:42.982245] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:39.666 [2024-11-18 03:09:42.982304] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:39.666 [2024-11-18 03:09:42.982482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.666 03:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:09:39.666 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.666 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.666 "name": "raid_bdev1", 00:09:39.666 "uuid": "8397b232-4fc0-45f0-b066-8951a2284079", 00:09:39.666 "strip_size_kb": 0, 00:09:39.666 "state": "online", 00:09:39.666 "raid_level": "raid1", 00:09:39.666 "superblock": true, 00:09:39.666 "num_base_bdevs": 3, 00:09:39.666 "num_base_bdevs_discovered": 3, 00:09:39.666 "num_base_bdevs_operational": 3, 00:09:39.666 "base_bdevs_list": [ 00:09:39.666 { 00:09:39.666 "name": "pt1", 00:09:39.666 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.666 "is_configured": true, 00:09:39.666 "data_offset": 2048, 00:09:39.666 "data_size": 63488 00:09:39.666 }, 00:09:39.666 { 00:09:39.666 "name": "pt2", 00:09:39.666 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.666 "is_configured": true, 00:09:39.666 "data_offset": 2048, 00:09:39.666 "data_size": 63488 00:09:39.666 }, 00:09:39.666 { 00:09:39.666 "name": "pt3", 00:09:39.666 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:39.666 "is_configured": true, 00:09:39.666 "data_offset": 2048, 00:09:39.666 "data_size": 63488 00:09:39.666 } 00:09:39.666 ] 00:09:39.666 }' 00:09:39.666 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.666 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.927 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:39.927 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:39.927 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:39.927 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:39.927 03:09:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.927 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:39.927 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.927 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:39.927 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.927 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.927 [2024-11-18 03:09:43.426741] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.927 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.927 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.927 "name": "raid_bdev1", 00:09:39.927 "aliases": [ 00:09:39.927 "8397b232-4fc0-45f0-b066-8951a2284079" 00:09:39.927 ], 00:09:39.927 "product_name": "Raid Volume", 00:09:39.927 "block_size": 512, 00:09:39.927 "num_blocks": 63488, 00:09:39.927 "uuid": "8397b232-4fc0-45f0-b066-8951a2284079", 00:09:39.927 "assigned_rate_limits": { 00:09:39.927 "rw_ios_per_sec": 0, 00:09:39.927 "rw_mbytes_per_sec": 0, 00:09:39.927 "r_mbytes_per_sec": 0, 00:09:39.927 "w_mbytes_per_sec": 0 00:09:39.927 }, 00:09:39.927 "claimed": false, 00:09:39.927 "zoned": false, 00:09:39.927 "supported_io_types": { 00:09:39.927 "read": true, 00:09:39.927 "write": true, 00:09:39.927 "unmap": false, 00:09:39.927 "flush": false, 00:09:39.927 "reset": true, 00:09:39.927 "nvme_admin": false, 00:09:39.927 "nvme_io": false, 00:09:39.927 "nvme_io_md": false, 00:09:39.927 "write_zeroes": true, 00:09:39.927 "zcopy": false, 00:09:39.927 "get_zone_info": false, 00:09:39.927 "zone_management": false, 00:09:39.927 "zone_append": false, 00:09:39.927 "compare": false, 00:09:39.927 
"compare_and_write": false, 00:09:39.927 "abort": false, 00:09:39.927 "seek_hole": false, 00:09:39.927 "seek_data": false, 00:09:39.927 "copy": false, 00:09:39.927 "nvme_iov_md": false 00:09:39.927 }, 00:09:39.927 "memory_domains": [ 00:09:39.927 { 00:09:39.927 "dma_device_id": "system", 00:09:39.927 "dma_device_type": 1 00:09:39.927 }, 00:09:39.927 { 00:09:39.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.927 "dma_device_type": 2 00:09:39.927 }, 00:09:39.927 { 00:09:39.927 "dma_device_id": "system", 00:09:39.927 "dma_device_type": 1 00:09:39.927 }, 00:09:39.927 { 00:09:39.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.927 "dma_device_type": 2 00:09:39.927 }, 00:09:39.927 { 00:09:39.927 "dma_device_id": "system", 00:09:39.927 "dma_device_type": 1 00:09:39.927 }, 00:09:39.927 { 00:09:39.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.927 "dma_device_type": 2 00:09:39.927 } 00:09:39.927 ], 00:09:39.927 "driver_specific": { 00:09:39.927 "raid": { 00:09:39.927 "uuid": "8397b232-4fc0-45f0-b066-8951a2284079", 00:09:39.927 "strip_size_kb": 0, 00:09:39.927 "state": "online", 00:09:39.927 "raid_level": "raid1", 00:09:39.927 "superblock": true, 00:09:39.927 "num_base_bdevs": 3, 00:09:39.927 "num_base_bdevs_discovered": 3, 00:09:39.927 "num_base_bdevs_operational": 3, 00:09:39.927 "base_bdevs_list": [ 00:09:39.927 { 00:09:39.927 "name": "pt1", 00:09:39.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.927 "is_configured": true, 00:09:39.927 "data_offset": 2048, 00:09:39.927 "data_size": 63488 00:09:39.927 }, 00:09:39.927 { 00:09:39.927 "name": "pt2", 00:09:39.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.927 "is_configured": true, 00:09:39.927 "data_offset": 2048, 00:09:39.927 "data_size": 63488 00:09:39.927 }, 00:09:39.927 { 00:09:39.927 "name": "pt3", 00:09:39.927 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:39.927 "is_configured": true, 00:09:39.927 "data_offset": 2048, 00:09:39.927 "data_size": 63488 00:09:39.927 } 
00:09:39.927 ] 00:09:39.927 } 00:09:39.927 } 00:09:39.927 }' 00:09:39.927 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.927 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:39.927 pt2 00:09:39.927 pt3' 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.187 [2024-11-18 03:09:43.694293] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8397b232-4fc0-45f0-b066-8951a2284079 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8397b232-4fc0-45f0-b066-8951a2284079 ']' 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.187 [2024-11-18 03:09:43.737871] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.187 [2024-11-18 03:09:43.737946] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.187 [2024-11-18 03:09:43.738082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.187 [2024-11-18 03:09:43.738188] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.187 [2024-11-18 03:09:43.738250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.187 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:40.447 03:09:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.447 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.447 [2024-11-18 03:09:43.881628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:40.447 [2024-11-18 03:09:43.883728] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:40.447 [2024-11-18 03:09:43.883823] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:40.447 [2024-11-18 03:09:43.883927] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:40.448 [2024-11-18 03:09:43.884043] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:40.448 [2024-11-18 03:09:43.884109] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:40.448 [2024-11-18 03:09:43.884198] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.448 [2024-11-18 03:09:43.884233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:09:40.448 request: 00:09:40.448 { 00:09:40.448 "name": "raid_bdev1", 00:09:40.448 "raid_level": "raid1", 00:09:40.448 "base_bdevs": [ 00:09:40.448 "malloc1", 00:09:40.448 "malloc2", 00:09:40.448 "malloc3" 00:09:40.448 ], 00:09:40.448 "superblock": false, 00:09:40.448 "method": "bdev_raid_create", 00:09:40.448 "req_id": 1 00:09:40.448 } 00:09:40.448 Got JSON-RPC error response 00:09:40.448 response: 00:09:40.448 { 00:09:40.448 "code": -17, 00:09:40.448 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:40.448 } 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq 
-r '.[]' 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.448 [2024-11-18 03:09:43.933536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:40.448 [2024-11-18 03:09:43.933669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.448 [2024-11-18 03:09:43.933712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:40.448 [2024-11-18 03:09:43.933758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.448 [2024-11-18 03:09:43.936255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.448 [2024-11-18 03:09:43.936356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:40.448 [2024-11-18 03:09:43.936479] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:40.448 [2024-11-18 03:09:43.936586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:40.448 pt1 00:09:40.448 
03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.448 "name": "raid_bdev1", 00:09:40.448 "uuid": "8397b232-4fc0-45f0-b066-8951a2284079", 00:09:40.448 "strip_size_kb": 0, 00:09:40.448 
"state": "configuring", 00:09:40.448 "raid_level": "raid1", 00:09:40.448 "superblock": true, 00:09:40.448 "num_base_bdevs": 3, 00:09:40.448 "num_base_bdevs_discovered": 1, 00:09:40.448 "num_base_bdevs_operational": 3, 00:09:40.448 "base_bdevs_list": [ 00:09:40.448 { 00:09:40.448 "name": "pt1", 00:09:40.448 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.448 "is_configured": true, 00:09:40.448 "data_offset": 2048, 00:09:40.448 "data_size": 63488 00:09:40.448 }, 00:09:40.448 { 00:09:40.448 "name": null, 00:09:40.448 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.448 "is_configured": false, 00:09:40.448 "data_offset": 2048, 00:09:40.448 "data_size": 63488 00:09:40.448 }, 00:09:40.448 { 00:09:40.448 "name": null, 00:09:40.448 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.448 "is_configured": false, 00:09:40.448 "data_offset": 2048, 00:09:40.448 "data_size": 63488 00:09:40.448 } 00:09:40.448 ] 00:09:40.448 }' 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.448 03:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 [2024-11-18 03:09:44.380798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:41.044 [2024-11-18 03:09:44.380933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.044 [2024-11-18 03:09:44.380959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:41.044 
[2024-11-18 03:09:44.380990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.044 [2024-11-18 03:09:44.381434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.044 [2024-11-18 03:09:44.381457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:41.044 [2024-11-18 03:09:44.381532] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:41.044 [2024-11-18 03:09:44.381556] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.044 pt2 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 [2024-11-18 03:09:44.392772] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.044 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.044 "name": "raid_bdev1", 00:09:41.044 "uuid": "8397b232-4fc0-45f0-b066-8951a2284079", 00:09:41.044 "strip_size_kb": 0, 00:09:41.044 "state": "configuring", 00:09:41.044 "raid_level": "raid1", 00:09:41.044 "superblock": true, 00:09:41.044 "num_base_bdevs": 3, 00:09:41.045 "num_base_bdevs_discovered": 1, 00:09:41.045 "num_base_bdevs_operational": 3, 00:09:41.045 "base_bdevs_list": [ 00:09:41.045 { 00:09:41.045 "name": "pt1", 00:09:41.045 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.045 "is_configured": true, 00:09:41.045 "data_offset": 2048, 00:09:41.045 "data_size": 63488 00:09:41.045 }, 00:09:41.045 { 00:09:41.045 "name": null, 00:09:41.045 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.045 "is_configured": false, 00:09:41.045 "data_offset": 0, 00:09:41.045 "data_size": 63488 00:09:41.045 }, 00:09:41.045 { 00:09:41.045 "name": null, 00:09:41.045 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.045 "is_configured": false, 00:09:41.045 
"data_offset": 2048, 00:09:41.045 "data_size": 63488 00:09:41.045 } 00:09:41.045 ] 00:09:41.045 }' 00:09:41.045 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.045 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.311 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:41.311 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:41.311 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:41.311 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.311 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.571 [2024-11-18 03:09:44.891998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:41.571 [2024-11-18 03:09:44.892116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.571 [2024-11-18 03:09:44.892153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:41.571 [2024-11-18 03:09:44.892203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.571 [2024-11-18 03:09:44.892661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.571 [2024-11-18 03:09:44.892722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:41.571 [2024-11-18 03:09:44.892835] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:41.571 [2024-11-18 03:09:44.892896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.571 pt2 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.571 03:09:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.571 [2024-11-18 03:09:44.903928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:41.571 [2024-11-18 03:09:44.904029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.571 [2024-11-18 03:09:44.904065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:41.571 [2024-11-18 03:09:44.904114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.571 [2024-11-18 03:09:44.904501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.571 [2024-11-18 03:09:44.904555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:41.571 [2024-11-18 03:09:44.904649] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:41.571 [2024-11-18 03:09:44.904699] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:41.571 [2024-11-18 03:09:44.904824] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:41.571 [2024-11-18 03:09:44.904866] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:41.571 [2024-11-18 03:09:44.905119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:41.571 [2024-11-18 03:09:44.905273] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000006980 00:09:41.571 [2024-11-18 03:09:44.905318] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:41.571 [2024-11-18 03:09:44.905455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.571 pt3 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.571 03:09:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.571 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.571 "name": "raid_bdev1", 00:09:41.571 "uuid": "8397b232-4fc0-45f0-b066-8951a2284079", 00:09:41.571 "strip_size_kb": 0, 00:09:41.571 "state": "online", 00:09:41.571 "raid_level": "raid1", 00:09:41.571 "superblock": true, 00:09:41.571 "num_base_bdevs": 3, 00:09:41.571 "num_base_bdevs_discovered": 3, 00:09:41.571 "num_base_bdevs_operational": 3, 00:09:41.571 "base_bdevs_list": [ 00:09:41.571 { 00:09:41.571 "name": "pt1", 00:09:41.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.571 "is_configured": true, 00:09:41.571 "data_offset": 2048, 00:09:41.571 "data_size": 63488 00:09:41.571 }, 00:09:41.571 { 00:09:41.571 "name": "pt2", 00:09:41.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.571 "is_configured": true, 00:09:41.571 "data_offset": 2048, 00:09:41.571 "data_size": 63488 00:09:41.571 }, 00:09:41.571 { 00:09:41.571 "name": "pt3", 00:09:41.571 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.571 "is_configured": true, 00:09:41.571 "data_offset": 2048, 00:09:41.571 "data_size": 63488 00:09:41.571 } 00:09:41.571 ] 00:09:41.572 }' 00:09:41.572 03:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.572 03:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.832 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:41.832 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:41.832 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:41.832 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.832 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.832 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.832 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.832 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:41.832 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.832 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.832 [2024-11-18 03:09:45.335557] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.832 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.832 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.832 "name": "raid_bdev1", 00:09:41.832 "aliases": [ 00:09:41.833 "8397b232-4fc0-45f0-b066-8951a2284079" 00:09:41.833 ], 00:09:41.833 "product_name": "Raid Volume", 00:09:41.833 "block_size": 512, 00:09:41.833 "num_blocks": 63488, 00:09:41.833 "uuid": "8397b232-4fc0-45f0-b066-8951a2284079", 00:09:41.833 "assigned_rate_limits": { 00:09:41.833 "rw_ios_per_sec": 0, 00:09:41.833 "rw_mbytes_per_sec": 0, 00:09:41.833 "r_mbytes_per_sec": 0, 00:09:41.833 "w_mbytes_per_sec": 0 00:09:41.833 }, 00:09:41.833 "claimed": false, 00:09:41.833 "zoned": false, 00:09:41.833 "supported_io_types": { 00:09:41.833 "read": true, 00:09:41.833 "write": true, 00:09:41.833 "unmap": false, 00:09:41.833 "flush": false, 00:09:41.833 "reset": true, 00:09:41.833 "nvme_admin": false, 00:09:41.833 "nvme_io": false, 00:09:41.833 "nvme_io_md": false, 00:09:41.833 "write_zeroes": true, 00:09:41.833 "zcopy": false, 00:09:41.833 "get_zone_info": 
false, 00:09:41.833 "zone_management": false, 00:09:41.833 "zone_append": false, 00:09:41.833 "compare": false, 00:09:41.833 "compare_and_write": false, 00:09:41.833 "abort": false, 00:09:41.833 "seek_hole": false, 00:09:41.833 "seek_data": false, 00:09:41.833 "copy": false, 00:09:41.833 "nvme_iov_md": false 00:09:41.833 }, 00:09:41.833 "memory_domains": [ 00:09:41.833 { 00:09:41.833 "dma_device_id": "system", 00:09:41.833 "dma_device_type": 1 00:09:41.833 }, 00:09:41.833 { 00:09:41.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.833 "dma_device_type": 2 00:09:41.833 }, 00:09:41.833 { 00:09:41.833 "dma_device_id": "system", 00:09:41.833 "dma_device_type": 1 00:09:41.833 }, 00:09:41.833 { 00:09:41.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.833 "dma_device_type": 2 00:09:41.833 }, 00:09:41.833 { 00:09:41.833 "dma_device_id": "system", 00:09:41.833 "dma_device_type": 1 00:09:41.833 }, 00:09:41.833 { 00:09:41.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.833 "dma_device_type": 2 00:09:41.833 } 00:09:41.833 ], 00:09:41.833 "driver_specific": { 00:09:41.833 "raid": { 00:09:41.833 "uuid": "8397b232-4fc0-45f0-b066-8951a2284079", 00:09:41.833 "strip_size_kb": 0, 00:09:41.833 "state": "online", 00:09:41.833 "raid_level": "raid1", 00:09:41.833 "superblock": true, 00:09:41.833 "num_base_bdevs": 3, 00:09:41.833 "num_base_bdevs_discovered": 3, 00:09:41.833 "num_base_bdevs_operational": 3, 00:09:41.833 "base_bdevs_list": [ 00:09:41.833 { 00:09:41.833 "name": "pt1", 00:09:41.833 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.833 "is_configured": true, 00:09:41.833 "data_offset": 2048, 00:09:41.833 "data_size": 63488 00:09:41.833 }, 00:09:41.833 { 00:09:41.833 "name": "pt2", 00:09:41.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.833 "is_configured": true, 00:09:41.833 "data_offset": 2048, 00:09:41.833 "data_size": 63488 00:09:41.833 }, 00:09:41.833 { 00:09:41.833 "name": "pt3", 00:09:41.833 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:41.833 "is_configured": true, 00:09:41.833 "data_offset": 2048, 00:09:41.833 "data_size": 63488 00:09:41.833 } 00:09:41.833 ] 00:09:41.833 } 00:09:41.833 } 00:09:41.833 }' 00:09:41.833 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.833 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:41.833 pt2 00:09:41.833 pt3' 00:09:41.833 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.093 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:42.093 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.093 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:42.093 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.093 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.093 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.093 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.093 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.094 03:09:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.094 [2024-11-18 03:09:45.595030] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8397b232-4fc0-45f0-b066-8951a2284079 '!=' 8397b232-4fc0-45f0-b066-8951a2284079 ']' 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.094 [2024-11-18 03:09:45.642683] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.094 03:09:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.094 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.354 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.354 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.354 "name": "raid_bdev1", 00:09:42.354 "uuid": "8397b232-4fc0-45f0-b066-8951a2284079", 00:09:42.354 "strip_size_kb": 0, 00:09:42.354 "state": "online", 00:09:42.354 "raid_level": "raid1", 00:09:42.354 "superblock": true, 00:09:42.354 "num_base_bdevs": 3, 00:09:42.354 "num_base_bdevs_discovered": 2, 00:09:42.354 "num_base_bdevs_operational": 2, 00:09:42.354 "base_bdevs_list": [ 00:09:42.354 { 00:09:42.354 "name": null, 00:09:42.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.354 "is_configured": false, 00:09:42.354 "data_offset": 0, 00:09:42.354 "data_size": 63488 00:09:42.354 }, 00:09:42.354 { 00:09:42.354 "name": "pt2", 00:09:42.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.354 "is_configured": true, 00:09:42.354 "data_offset": 2048, 00:09:42.354 "data_size": 63488 00:09:42.354 }, 00:09:42.354 { 00:09:42.354 "name": "pt3", 00:09:42.354 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:42.354 "is_configured": true, 00:09:42.354 "data_offset": 2048, 00:09:42.354 "data_size": 63488 00:09:42.354 } 
00:09:42.354 ] 00:09:42.354 }' 00:09:42.354 03:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.354 03:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.614 [2024-11-18 03:09:46.077938] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:42.614 [2024-11-18 03:09:46.078039] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.614 [2024-11-18 03:09:46.078160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.614 [2024-11-18 03:09:46.078245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.614 [2024-11-18 03:09:46.078302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.614 03:09:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.614 [2024-11-18 03:09:46.149793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:42.614 [2024-11-18 03:09:46.149887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.614 [2024-11-18 03:09:46.149910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:42.614 [2024-11-18 03:09:46.149919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.614 [2024-11-18 03:09:46.152157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.614 [2024-11-18 03:09:46.152231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:42.614 [2024-11-18 03:09:46.152332] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:42.614 [2024-11-18 03:09:46.152392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:42.614 pt2 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.614 03:09:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.614 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.873 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.873 "name": "raid_bdev1", 00:09:42.873 "uuid": "8397b232-4fc0-45f0-b066-8951a2284079", 00:09:42.873 "strip_size_kb": 0, 00:09:42.873 "state": "configuring", 00:09:42.873 "raid_level": "raid1", 00:09:42.873 "superblock": true, 00:09:42.873 "num_base_bdevs": 3, 00:09:42.873 "num_base_bdevs_discovered": 1, 00:09:42.873 "num_base_bdevs_operational": 2, 00:09:42.873 "base_bdevs_list": [ 00:09:42.873 { 00:09:42.873 "name": null, 00:09:42.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.873 "is_configured": false, 00:09:42.873 "data_offset": 2048, 00:09:42.873 "data_size": 63488 00:09:42.873 }, 00:09:42.873 { 00:09:42.873 "name": "pt2", 00:09:42.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.873 "is_configured": true, 00:09:42.873 "data_offset": 2048, 00:09:42.873 "data_size": 63488 00:09:42.873 }, 00:09:42.873 { 00:09:42.873 "name": null, 00:09:42.873 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:42.873 "is_configured": false, 00:09:42.873 "data_offset": 2048, 00:09:42.873 "data_size": 63488 00:09:42.873 } 
00:09:42.873 ] 00:09:42.873 }' 00:09:42.873 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.873 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.134 [2024-11-18 03:09:46.533201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:43.134 [2024-11-18 03:09:46.533341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.134 [2024-11-18 03:09:46.533386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:43.134 [2024-11-18 03:09:46.533419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.134 [2024-11-18 03:09:46.533864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.134 [2024-11-18 03:09:46.533925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:43.134 [2024-11-18 03:09:46.534047] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:43.134 [2024-11-18 03:09:46.534103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:43.134 [2024-11-18 03:09:46.534233] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 
00:09:43.134 [2024-11-18 03:09:46.534274] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:43.134 [2024-11-18 03:09:46.534564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:43.134 [2024-11-18 03:09:46.534728] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:09:43.134 [2024-11-18 03:09:46.534772] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:09:43.134 [2024-11-18 03:09:46.534922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:43.134 pt3
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:43.134 "name": "raid_bdev1",
00:09:43.134 "uuid": "8397b232-4fc0-45f0-b066-8951a2284079",
00:09:43.134 "strip_size_kb": 0,
00:09:43.134 "state": "online",
00:09:43.134 "raid_level": "raid1",
00:09:43.134 "superblock": true,
00:09:43.134 "num_base_bdevs": 3,
00:09:43.134 "num_base_bdevs_discovered": 2,
00:09:43.134 "num_base_bdevs_operational": 2,
00:09:43.134 "base_bdevs_list": [
00:09:43.134 {
00:09:43.134 "name": null,
00:09:43.134 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:43.134 "is_configured": false,
00:09:43.134 "data_offset": 2048,
00:09:43.134 "data_size": 63488
00:09:43.134 },
00:09:43.134 {
00:09:43.134 "name": "pt2",
00:09:43.134 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:43.134 "is_configured": true,
00:09:43.134 "data_offset": 2048,
00:09:43.134 "data_size": 63488
00:09:43.134 },
00:09:43.134 {
00:09:43.134 "name": "pt3",
00:09:43.134 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:43.134 "is_configured": true,
00:09:43.134 "data_offset": 2048,
00:09:43.134 "data_size": 63488
00:09:43.134 }
00:09:43.134 ]
00:09:43.134 }'
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:43.134 03:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.705 [2024-11-18 03:09:47.016342] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:43.705 [2024-11-18 03:09:47.016429] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:43.705 [2024-11-18 03:09:47.016545] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:43.705 [2024-11-18 03:09:47.016621] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:43.705 [2024-11-18 03:09:47.016679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']'
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.705 [2024-11-18 03:09:47.088191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:43.705 [2024-11-18 03:09:47.088299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:43.705 [2024-11-18 03:09:47.088332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:09:43.705 [2024-11-18 03:09:47.088366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:43.705 [2024-11-18 03:09:47.090562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:43.705 [2024-11-18 03:09:47.090636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:43.705 [2024-11-18 03:09:47.090730] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:09:43.705 [2024-11-18 03:09:47.090788] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:43.705 [2024-11-18 03:09:47.090915] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:09:43.705 [2024-11-18 03:09:47.090985] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:43.705 [2024-11-18 03:09:47.091048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring
00:09:43.705 [2024-11-18 03:09:47.091129] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:43.705 pt1
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']'
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.705 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:43.705 "name": "raid_bdev1",
00:09:43.705 "uuid": "8397b232-4fc0-45f0-b066-8951a2284079",
00:09:43.705 "strip_size_kb": 0,
00:09:43.705 "state": "configuring",
00:09:43.705 "raid_level": "raid1",
00:09:43.705 "superblock": true,
00:09:43.705 "num_base_bdevs": 3,
00:09:43.705 "num_base_bdevs_discovered": 1,
00:09:43.705 "num_base_bdevs_operational": 2,
00:09:43.705 "base_bdevs_list": [
00:09:43.705 {
00:09:43.705 "name": null,
00:09:43.705 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:43.705 "is_configured": false,
00:09:43.705 "data_offset": 2048,
00:09:43.705 "data_size": 63488
00:09:43.705 },
00:09:43.705 {
00:09:43.705 "name": "pt2",
00:09:43.705 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:43.705 "is_configured": true,
00:09:43.705 "data_offset": 2048,
00:09:43.705 "data_size": 63488
00:09:43.705 },
00:09:43.705 {
00:09:43.705 "name": null,
00:09:43.706 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:43.706 "is_configured": false,
00:09:43.706 "data_offset": 2048,
00:09:43.706 "data_size": 63488
00:09:43.706 }
00:09:43.706 ]
00:09:43.706 }'
00:09:43.706 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:43.706 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.965 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring
00:09:43.965 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:09:43.965 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.965 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.965 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.224 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]]
00:09:44.224 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:44.224 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.224 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.224 [2024-11-18 03:09:47.563385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:09:44.224 [2024-11-18 03:09:47.563497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:44.224 [2024-11-18 03:09:47.563532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:09:44.224 [2024-11-18 03:09:47.563563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:44.224 [2024-11-18 03:09:47.563991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:44.224 [2024-11-18 03:09:47.564052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:09:44.224 [2024-11-18 03:09:47.564152] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:09:44.225 [2024-11-18 03:09:47.564226] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:09:44.225 [2024-11-18 03:09:47.564353] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:09:44.225 [2024-11-18 03:09:47.564393] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:44.225 [2024-11-18 03:09:47.564628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:09:44.225 [2024-11-18 03:09:47.564788] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:09:44.225 [2024-11-18 03:09:47.564829] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:09:44.225 [2024-11-18 03:09:47.564979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:44.225 pt3
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:44.225 "name": "raid_bdev1",
00:09:44.225 "uuid": "8397b232-4fc0-45f0-b066-8951a2284079",
00:09:44.225 "strip_size_kb": 0,
00:09:44.225 "state": "online",
00:09:44.225 "raid_level": "raid1",
00:09:44.225 "superblock": true,
00:09:44.225 "num_base_bdevs": 3,
00:09:44.225 "num_base_bdevs_discovered": 2,
00:09:44.225 "num_base_bdevs_operational": 2,
00:09:44.225 "base_bdevs_list": [
00:09:44.225 {
00:09:44.225 "name": null,
00:09:44.225 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:44.225 "is_configured": false,
00:09:44.225 "data_offset": 2048,
00:09:44.225 "data_size": 63488
00:09:44.225 },
00:09:44.225 {
00:09:44.225 "name": "pt2",
00:09:44.225 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:44.225 "is_configured": true,
00:09:44.225 "data_offset": 2048,
00:09:44.225 "data_size": 63488
00:09:44.225 },
00:09:44.225 {
00:09:44.225 "name": "pt3",
00:09:44.225 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:44.225 "is_configured": true,
00:09:44.225 "data_offset": 2048,
00:09:44.225 "data_size": 63488
00:09:44.225 }
00:09:44.225 ]
00:09:44.225 }'
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:44.225 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.485 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:09:44.485 03:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:09:44.485 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.485 03:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.485 03:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.485 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:09:44.485 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:44.485 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:09:44.485 03:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.485 03:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.485 [2024-11-18 03:09:48.047016] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:44.745 03:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.745 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8397b232-4fc0-45f0-b066-8951a2284079 '!=' 8397b232-4fc0-45f0-b066-8951a2284079 ']'
00:09:44.745 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79814
00:09:44.745 03:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79814 ']'
00:09:44.745 03:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79814
00:09:44.745 03:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:09:44.745 03:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:44.745 03:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79814
killing process with pid 79814
00:09:44.745 03:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:44.745 03:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:44.745 03:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79814'
00:09:44.745 03:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 79814
00:09:44.745 [2024-11-18 03:09:48.125776] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:44.745 [2024-11-18 03:09:48.125885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:44.745 03:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79814
00:09:44.745 [2024-11-18 03:09:48.125957] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:44.745 [2024-11-18 03:09:48.125968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline
00:09:44.745 [2024-11-18 03:09:48.160621] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:45.005 03:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:09:45.005
00:09:45.005 real 0m6.502s
00:09:45.005 user 0m10.905s
00:09:45.005 sys 0m1.329s
00:09:45.005 ************************************
00:09:45.005 END TEST raid_superblock_test
00:09:45.005 ************************************
00:09:45.006 03:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:45.006 03:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.006 03:09:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read
00:09:45.006 03:09:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:45.006 03:09:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:45.006 03:09:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:45.006 ************************************
00:09:45.006 START TEST raid_read_error_test
00:09:45.006 ************************************
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vcxNaOZwk1
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80243
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80243
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 80243 ']'
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:45.006 03:09:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.265 [2024-11-18 03:09:48.582136] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:09:45.265 [2024-11-18 03:09:48.582762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80243 ]
00:09:45.265 [2024-11-18 03:09:48.724074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:45.265 [2024-11-18 03:09:48.774249] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:45.265 [2024-11-18 03:09:48.817475] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:45.265 [2024-11-18 03:09:48.817507] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.206 BaseBdev1_malloc
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.206 true
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.206 [2024-11-18 03:09:49.468312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:46.206 [2024-11-18 03:09:49.468456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:46.206 [2024-11-18 03:09:49.468500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:09:46.206 [2024-11-18 03:09:49.468530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:46.206 [2024-11-18 03:09:49.470704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:46.206 [2024-11-18 03:09:49.470778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:09:46.206 BaseBdev1
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.206 BaseBdev2_malloc
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.206 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.207 true
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.207 [2024-11-18 03:09:49.520198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:46.207 [2024-11-18 03:09:49.520258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:46.207 [2024-11-18 03:09:49.520279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:09:46.207 [2024-11-18 03:09:49.520289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:46.207 [2024-11-18 03:09:49.522539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:46.207 [2024-11-18 03:09:49.522578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:09:46.207 BaseBdev2
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.207 BaseBdev3_malloc
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.207 true
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.207 [2024-11-18 03:09:49.561094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:09:46.207 [2024-11-18 03:09:49.561198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:46.207 [2024-11-18 03:09:49.561241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:09:46.207 [2024-11-18 03:09:49.561251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:46.207 [2024-11-18 03:09:49.563409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:46.207 [2024-11-18 03:09:49.566243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:09:46.207 BaseBdev3
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.207 [2024-11-18 03:09:49.574432] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:46.207 [2024-11-18 03:09:49.576462] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:46.207 [2024-11-18 03:09:49.576606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:46.207 [2024-11-18 03:09:49.576808] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:09:46.207 [2024-11-18 03:09:49.576866] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:46.207 [2024-11-18 03:09:49.577150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:09:46.207 [2024-11-18 03:09:49.577348] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:09:46.207 [2024-11-18 03:09:49.577393] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:09:46.207 [2024-11-18 03:09:49.577575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:46.207 "name": "raid_bdev1",
00:09:46.207 "uuid": "1ab096b1-cf4c-42d8-b25d-d999219368f5",
00:09:46.207 "strip_size_kb": 0,
00:09:46.207 "state": "online",
00:09:46.207 "raid_level": "raid1",
00:09:46.207 "superblock": true,
00:09:46.207 "num_base_bdevs": 3,
00:09:46.207 "num_base_bdevs_discovered": 3,
00:09:46.207 "num_base_bdevs_operational": 3,
00:09:46.207 "base_bdevs_list": [
00:09:46.207 {
00:09:46.207 "name": "BaseBdev1",
00:09:46.207 "uuid": "b415b61c-0272-505d-a5f0-afa8b7aab58a",
00:09:46.207 "is_configured": true,
00:09:46.207 "data_offset": 2048,
00:09:46.207 "data_size": 63488
00:09:46.207 },
00:09:46.207 {
00:09:46.207 "name": "BaseBdev2",
00:09:46.207 "uuid": "89973962-e976-58cb-a8ab-3c354a6be6a0",
00:09:46.207 "is_configured": true,
00:09:46.207 "data_offset": 2048,
00:09:46.207 "data_size": 63488
00:09:46.207 },
00:09:46.207 {
00:09:46.207 "name": "BaseBdev3",
00:09:46.207 "uuid": "cc74dfde-3347-5945-938f-b5076920e0bd",
00:09:46.207 "is_configured": true,
00:09:46.207 "data_offset": 2048,
00:09:46.207 "data_size": 63488
00:09:46.207 }
00:09:46.207 ]
00:09:46.207 }'
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:46.207 03:09:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.468 03:09:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:46.468 03:09:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:46.727 [2024-11-18 03:09:50.121933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:47.667 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:09:47.667 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.667 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.667 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.667 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:47.667 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:09:47.667 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]]
00:09:47.667 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:09:47.667 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:09:47.667 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:47.
03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.667 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.667 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.667 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.667 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.667 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.667 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.667 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.668 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.668 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.668 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.668 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.668 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.668 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.668 "name": "raid_bdev1", 00:09:47.668 "uuid": "1ab096b1-cf4c-42d8-b25d-d999219368f5", 00:09:47.668 "strip_size_kb": 0, 00:09:47.668 "state": "online", 00:09:47.668 "raid_level": "raid1", 00:09:47.668 "superblock": true, 00:09:47.668 "num_base_bdevs": 3, 00:09:47.668 "num_base_bdevs_discovered": 3, 00:09:47.668 "num_base_bdevs_operational": 3, 00:09:47.668 "base_bdevs_list": [ 00:09:47.668 { 00:09:47.668 "name": "BaseBdev1", 00:09:47.668 "uuid": "b415b61c-0272-505d-a5f0-afa8b7aab58a", 
00:09:47.668 "is_configured": true, 00:09:47.668 "data_offset": 2048, 00:09:47.668 "data_size": 63488 00:09:47.668 }, 00:09:47.668 { 00:09:47.668 "name": "BaseBdev2", 00:09:47.668 "uuid": "89973962-e976-58cb-a8ab-3c354a6be6a0", 00:09:47.668 "is_configured": true, 00:09:47.668 "data_offset": 2048, 00:09:47.668 "data_size": 63488 00:09:47.668 }, 00:09:47.668 { 00:09:47.668 "name": "BaseBdev3", 00:09:47.668 "uuid": "cc74dfde-3347-5945-938f-b5076920e0bd", 00:09:47.668 "is_configured": true, 00:09:47.668 "data_offset": 2048, 00:09:47.668 "data_size": 63488 00:09:47.668 } 00:09:47.668 ] 00:09:47.668 }' 00:09:47.668 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.668 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.928 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:47.928 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.928 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.928 [2024-11-18 03:09:51.468649] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.928 [2024-11-18 03:09:51.468742] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.928 [2024-11-18 03:09:51.471462] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.928 [2024-11-18 03:09:51.471554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.928 [2024-11-18 03:09:51.471689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.928 [2024-11-18 03:09:51.471757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:47.928 { 00:09:47.928 "results": [ 00:09:47.928 { 00:09:47.928 "job": "raid_bdev1", 
00:09:47.928 "core_mask": "0x1", 00:09:47.928 "workload": "randrw", 00:09:47.928 "percentage": 50, 00:09:47.928 "status": "finished", 00:09:47.928 "queue_depth": 1, 00:09:47.928 "io_size": 131072, 00:09:47.928 "runtime": 1.34706, 00:09:47.928 "iops": 13699.464017935356, 00:09:47.928 "mibps": 1712.4330022419194, 00:09:47.928 "io_failed": 0, 00:09:47.928 "io_timeout": 0, 00:09:47.928 "avg_latency_us": 70.32749132387718, 00:09:47.928 "min_latency_us": 23.699563318777294, 00:09:47.928 "max_latency_us": 1502.46288209607 00:09:47.928 } 00:09:47.928 ], 00:09:47.928 "core_count": 1 00:09:47.928 } 00:09:47.928 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.928 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80243 00:09:47.928 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 80243 ']' 00:09:47.928 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 80243 00:09:47.928 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:47.928 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:47.928 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80243 00:09:48.188 killing process with pid 80243 00:09:48.188 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:48.188 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:48.188 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80243' 00:09:48.188 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 80243 00:09:48.188 [2024-11-18 03:09:51.516162] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:48.188 03:09:51 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 80243 00:09:48.188 [2024-11-18 03:09:51.542268] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.448 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:48.448 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vcxNaOZwk1 00:09:48.448 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:48.448 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:48.448 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:48.448 ************************************ 00:09:48.448 END TEST raid_read_error_test 00:09:48.448 ************************************ 00:09:48.448 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:48.448 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:48.448 03:09:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:48.448 00:09:48.448 real 0m3.310s 00:09:48.448 user 0m4.191s 00:09:48.448 sys 0m0.524s 00:09:48.448 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.448 03:09:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.448 03:09:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:48.448 03:09:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:48.448 03:09:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.448 03:09:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.448 ************************************ 00:09:48.448 START TEST raid_write_error_test 00:09:48.448 ************************************ 00:09:48.448 03:09:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HKSg8QbKBd 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80378 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80378 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 80378 ']' 00:09:48.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:48.448 03:09:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.448 [2024-11-18 03:09:51.959404] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:48.448 [2024-11-18 03:09:51.959547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80378 ] 00:09:48.709 [2024-11-18 03:09:52.121238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.709 [2024-11-18 03:09:52.171709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.709 [2024-11-18 03:09:52.214477] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.709 [2024-11-18 03:09:52.214516] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.279 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.279 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:49.279 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.279 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:49.279 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.279 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.279 BaseBdev1_malloc 00:09:49.279 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.279 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:49.279 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.279 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.279 true 00:09:49.279 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.279 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:49.279 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.279 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.279 [2024-11-18 03:09:52.833182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:49.279 [2024-11-18 03:09:52.833281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.280 [2024-11-18 03:09:52.833335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:49.280 [2024-11-18 03:09:52.833364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.280 [2024-11-18 03:09:52.835521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.280 [2024-11-18 03:09:52.835597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:49.280 BaseBdev1 00:09:49.280 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.280 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.280 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:49.280 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.280 03:09:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:49.541 BaseBdev2_malloc 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.541 true 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.541 [2024-11-18 03:09:52.882464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:49.541 [2024-11-18 03:09:52.882580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.541 [2024-11-18 03:09:52.882617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:49.541 [2024-11-18 03:09:52.882645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.541 [2024-11-18 03:09:52.884849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.541 [2024-11-18 03:09:52.884925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:49.541 BaseBdev2 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.541 03:09:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.541 BaseBdev3_malloc 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.541 true 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.541 [2024-11-18 03:09:52.923287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:49.541 [2024-11-18 03:09:52.923384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.541 [2024-11-18 03:09:52.923421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:49.541 [2024-11-18 03:09:52.923450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.541 [2024-11-18 03:09:52.925610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.541 [2024-11-18 03:09:52.925681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:49.541 BaseBdev3 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.541 [2024-11-18 03:09:52.935324] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.541 [2024-11-18 03:09:52.937334] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.541 [2024-11-18 03:09:52.937470] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.541 [2024-11-18 03:09:52.937647] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:49.541 [2024-11-18 03:09:52.937662] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:49.541 [2024-11-18 03:09:52.937915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:49.541 [2024-11-18 03:09:52.938099] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:49.541 [2024-11-18 03:09:52.938110] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:49.541 [2024-11-18 03:09:52.938239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.541 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.541 "name": "raid_bdev1", 00:09:49.541 "uuid": "2dac09c6-ab7c-4944-b846-23eb19037261", 00:09:49.541 "strip_size_kb": 0, 00:09:49.541 "state": "online", 00:09:49.541 "raid_level": "raid1", 00:09:49.541 "superblock": true, 00:09:49.541 "num_base_bdevs": 3, 00:09:49.541 "num_base_bdevs_discovered": 3, 00:09:49.541 "num_base_bdevs_operational": 3, 00:09:49.541 "base_bdevs_list": [ 00:09:49.541 { 00:09:49.541 "name": "BaseBdev1", 00:09:49.541 
"uuid": "7c2014fc-6bff-55ef-a2f1-510e0f294895", 00:09:49.541 "is_configured": true, 00:09:49.541 "data_offset": 2048, 00:09:49.541 "data_size": 63488 00:09:49.541 }, 00:09:49.541 { 00:09:49.541 "name": "BaseBdev2", 00:09:49.541 "uuid": "0eb4fe54-5293-54b9-a55a-275e16d3c7ef", 00:09:49.541 "is_configured": true, 00:09:49.541 "data_offset": 2048, 00:09:49.541 "data_size": 63488 00:09:49.541 }, 00:09:49.541 { 00:09:49.541 "name": "BaseBdev3", 00:09:49.541 "uuid": "a4c74e72-1417-5e5d-b08b-4917a6a186ac", 00:09:49.541 "is_configured": true, 00:09:49.542 "data_offset": 2048, 00:09:49.542 "data_size": 63488 00:09:49.542 } 00:09:49.542 ] 00:09:49.542 }' 00:09:49.542 03:09:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.542 03:09:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.802 03:09:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:49.802 03:09:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:50.061 [2024-11-18 03:09:53.438935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.032 [2024-11-18 03:09:54.350465] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:51.032 [2024-11-18 03:09:54.350618] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:51.032 [2024-11-18 03:09:54.350857] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 
00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.032 "name": "raid_bdev1", 00:09:51.032 "uuid": "2dac09c6-ab7c-4944-b846-23eb19037261", 00:09:51.032 "strip_size_kb": 0, 00:09:51.032 "state": "online", 00:09:51.032 "raid_level": "raid1", 00:09:51.032 "superblock": true, 00:09:51.032 "num_base_bdevs": 3, 00:09:51.032 "num_base_bdevs_discovered": 2, 00:09:51.032 "num_base_bdevs_operational": 2, 00:09:51.032 "base_bdevs_list": [ 00:09:51.032 { 00:09:51.032 "name": null, 00:09:51.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.032 "is_configured": false, 00:09:51.032 "data_offset": 0, 00:09:51.032 "data_size": 63488 00:09:51.032 }, 00:09:51.032 { 00:09:51.032 "name": "BaseBdev2", 00:09:51.032 "uuid": "0eb4fe54-5293-54b9-a55a-275e16d3c7ef", 00:09:51.032 "is_configured": true, 00:09:51.032 "data_offset": 2048, 00:09:51.032 "data_size": 63488 00:09:51.032 }, 00:09:51.032 { 00:09:51.032 "name": "BaseBdev3", 00:09:51.032 "uuid": "a4c74e72-1417-5e5d-b08b-4917a6a186ac", 00:09:51.032 "is_configured": true, 00:09:51.032 "data_offset": 2048, 00:09:51.032 "data_size": 63488 00:09:51.032 } 00:09:51.032 ] 00:09:51.032 }' 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.032 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.293 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.293 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.293 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.293 [2024-11-18 03:09:54.804660] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.293 [2024-11-18 03:09:54.804757] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.293 [2024-11-18 03:09:54.807736] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.293 [2024-11-18 03:09:54.807830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.293 [2024-11-18 03:09:54.807957] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.293 [2024-11-18 03:09:54.808034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:51.293 { 00:09:51.293 "results": [ 00:09:51.293 { 00:09:51.293 "job": "raid_bdev1", 00:09:51.293 "core_mask": "0x1", 00:09:51.293 "workload": "randrw", 00:09:51.293 "percentage": 50, 00:09:51.293 "status": "finished", 00:09:51.293 "queue_depth": 1, 00:09:51.293 "io_size": 131072, 00:09:51.293 "runtime": 1.366425, 00:09:51.293 "iops": 15401.50392447445, 00:09:51.293 "mibps": 1925.1879905593062, 00:09:51.293 "io_failed": 0, 00:09:51.293 "io_timeout": 0, 00:09:51.293 "avg_latency_us": 62.27296724320208, 00:09:51.293 "min_latency_us": 23.36419213973799, 00:09:51.293 "max_latency_us": 1552.5449781659388 00:09:51.293 } 00:09:51.293 ], 00:09:51.293 "core_count": 1 00:09:51.293 } 00:09:51.293 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.293 03:09:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80378 00:09:51.293 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 80378 ']' 00:09:51.293 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 80378 00:09:51.293 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:51.293 03:09:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:51.293 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80378 00:09:51.293 killing process with pid 80378 00:09:51.293 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:51.293 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:51.293 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80378' 00:09:51.293 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 80378 00:09:51.293 [2024-11-18 03:09:54.847523] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.293 03:09:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 80378 00:09:51.553 [2024-11-18 03:09:54.873697] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.553 03:09:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:51.553 03:09:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:51.553 03:09:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HKSg8QbKBd 00:09:51.553 03:09:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:51.553 03:09:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:51.553 03:09:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:51.553 03:09:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:51.553 03:09:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:51.553 00:09:51.553 real 0m3.261s 00:09:51.553 user 0m4.095s 00:09:51.553 sys 0m0.531s 00:09:51.553 
************************************ 00:09:51.553 END TEST raid_write_error_test 00:09:51.553 ************************************ 00:09:51.553 03:09:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.553 03:09:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.814 03:09:55 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:51.814 03:09:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:51.814 03:09:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:51.814 03:09:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:51.814 03:09:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.814 03:09:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.814 ************************************ 00:09:51.814 START TEST raid_state_function_test 00:09:51.814 ************************************ 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80505 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80505' 00:09:51.814 Process raid pid: 80505 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80505 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80505 ']' 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.814 03:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.814 [2024-11-18 03:09:55.289813] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:51.814 [2024-11-18 03:09:55.290062] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.074 [2024-11-18 03:09:55.452211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.074 [2024-11-18 03:09:55.503583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.074 [2024-11-18 03:09:55.547355] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.074 [2024-11-18 03:09:55.547458] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.644 [2024-11-18 03:09:56.133197] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:52.644 [2024-11-18 03:09:56.133297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:52.644 [2024-11-18 03:09:56.133359] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:52.644 [2024-11-18 03:09:56.133384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:52.644 [2024-11-18 03:09:56.133404] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:52.644 [2024-11-18 03:09:56.133419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:52.644 [2024-11-18 03:09:56.133426] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:52.644 [2024-11-18 03:09:56.133434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.644 "name": "Existed_Raid", 00:09:52.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.644 "strip_size_kb": 64, 00:09:52.644 "state": "configuring", 00:09:52.644 "raid_level": "raid0", 00:09:52.644 "superblock": false, 00:09:52.644 "num_base_bdevs": 4, 00:09:52.644 "num_base_bdevs_discovered": 0, 00:09:52.644 "num_base_bdevs_operational": 4, 00:09:52.644 "base_bdevs_list": [ 00:09:52.644 { 00:09:52.644 "name": "BaseBdev1", 00:09:52.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.644 "is_configured": false, 00:09:52.644 "data_offset": 0, 00:09:52.644 "data_size": 0 00:09:52.644 }, 00:09:52.644 { 00:09:52.644 "name": "BaseBdev2", 00:09:52.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.644 "is_configured": false, 00:09:52.644 "data_offset": 0, 00:09:52.644 "data_size": 0 00:09:52.644 }, 00:09:52.644 { 00:09:52.644 "name": "BaseBdev3", 00:09:52.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.644 "is_configured": false, 00:09:52.644 "data_offset": 0, 00:09:52.644 "data_size": 0 00:09:52.644 }, 00:09:52.644 { 00:09:52.644 "name": "BaseBdev4", 00:09:52.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.644 "is_configured": false, 00:09:52.644 "data_offset": 0, 00:09:52.644 "data_size": 0 00:09:52.644 } 00:09:52.644 ] 00:09:52.644 }' 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.644 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.215 [2024-11-18 03:09:56.628267] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.215 [2024-11-18 03:09:56.628376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.215 [2024-11-18 03:09:56.640276] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.215 [2024-11-18 03:09:56.640359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.215 [2024-11-18 03:09:56.640387] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.215 [2024-11-18 03:09:56.640410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.215 [2024-11-18 03:09:56.640428] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:53.215 [2024-11-18 03:09:56.640450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.215 [2024-11-18 03:09:56.640468] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:53.215 [2024-11-18 03:09:56.640507] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.215 [2024-11-18 03:09:56.661266] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.215 BaseBdev1 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.215 [ 00:09:53.215 { 00:09:53.215 "name": "BaseBdev1", 00:09:53.215 "aliases": [ 00:09:53.215 "06108450-aceb-4cf9-aabc-75c516f8b7e7" 00:09:53.215 ], 00:09:53.215 "product_name": "Malloc disk", 00:09:53.215 "block_size": 512, 00:09:53.215 "num_blocks": 65536, 00:09:53.215 "uuid": "06108450-aceb-4cf9-aabc-75c516f8b7e7", 00:09:53.215 "assigned_rate_limits": { 00:09:53.215 "rw_ios_per_sec": 0, 00:09:53.215 "rw_mbytes_per_sec": 0, 00:09:53.215 "r_mbytes_per_sec": 0, 00:09:53.215 "w_mbytes_per_sec": 0 00:09:53.215 }, 00:09:53.215 "claimed": true, 00:09:53.215 "claim_type": "exclusive_write", 00:09:53.215 "zoned": false, 00:09:53.215 "supported_io_types": { 00:09:53.215 "read": true, 00:09:53.215 "write": true, 00:09:53.215 "unmap": true, 00:09:53.215 "flush": true, 00:09:53.215 "reset": true, 00:09:53.215 "nvme_admin": false, 00:09:53.215 "nvme_io": false, 00:09:53.215 "nvme_io_md": false, 00:09:53.215 "write_zeroes": true, 00:09:53.215 "zcopy": true, 00:09:53.215 "get_zone_info": false, 00:09:53.215 "zone_management": false, 00:09:53.215 "zone_append": false, 00:09:53.215 "compare": false, 00:09:53.215 "compare_and_write": false, 00:09:53.215 "abort": true, 00:09:53.215 "seek_hole": false, 00:09:53.215 "seek_data": false, 00:09:53.215 "copy": true, 00:09:53.215 "nvme_iov_md": false 00:09:53.215 }, 00:09:53.215 "memory_domains": [ 00:09:53.215 { 00:09:53.215 "dma_device_id": "system", 00:09:53.215 "dma_device_type": 1 00:09:53.215 }, 00:09:53.215 { 00:09:53.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.215 "dma_device_type": 2 00:09:53.215 } 00:09:53.215 ], 00:09:53.215 "driver_specific": {} 00:09:53.215 } 00:09:53.215 ] 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.215 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.216 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.216 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.216 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.216 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.216 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.216 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.216 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.216 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.216 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.216 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.216 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.216 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.216 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.216 "name": "Existed_Raid", 
00:09:53.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.216 "strip_size_kb": 64, 00:09:53.216 "state": "configuring", 00:09:53.216 "raid_level": "raid0", 00:09:53.216 "superblock": false, 00:09:53.216 "num_base_bdevs": 4, 00:09:53.216 "num_base_bdevs_discovered": 1, 00:09:53.216 "num_base_bdevs_operational": 4, 00:09:53.216 "base_bdevs_list": [ 00:09:53.216 { 00:09:53.216 "name": "BaseBdev1", 00:09:53.216 "uuid": "06108450-aceb-4cf9-aabc-75c516f8b7e7", 00:09:53.216 "is_configured": true, 00:09:53.216 "data_offset": 0, 00:09:53.216 "data_size": 65536 00:09:53.216 }, 00:09:53.216 { 00:09:53.216 "name": "BaseBdev2", 00:09:53.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.216 "is_configured": false, 00:09:53.216 "data_offset": 0, 00:09:53.216 "data_size": 0 00:09:53.216 }, 00:09:53.216 { 00:09:53.216 "name": "BaseBdev3", 00:09:53.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.216 "is_configured": false, 00:09:53.216 "data_offset": 0, 00:09:53.216 "data_size": 0 00:09:53.216 }, 00:09:53.216 { 00:09:53.216 "name": "BaseBdev4", 00:09:53.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.216 "is_configured": false, 00:09:53.216 "data_offset": 0, 00:09:53.216 "data_size": 0 00:09:53.216 } 00:09:53.216 ] 00:09:53.216 }' 00:09:53.216 03:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.216 03:09:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.785 [2024-11-18 03:09:57.176476] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.785 [2024-11-18 03:09:57.176585] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.785 [2024-11-18 03:09:57.188480] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.785 [2024-11-18 03:09:57.190495] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.785 [2024-11-18 03:09:57.190569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.785 [2024-11-18 03:09:57.190612] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:53.785 [2024-11-18 03:09:57.190635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.785 [2024-11-18 03:09:57.190653] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:53.785 [2024-11-18 03:09:57.190675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.785 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.786 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.786 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.786 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.786 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.786 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.786 "name": "Existed_Raid", 00:09:53.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.786 "strip_size_kb": 64, 00:09:53.786 "state": "configuring", 00:09:53.786 "raid_level": "raid0", 00:09:53.786 "superblock": false, 00:09:53.786 "num_base_bdevs": 4, 00:09:53.786 
"num_base_bdevs_discovered": 1, 00:09:53.786 "num_base_bdevs_operational": 4, 00:09:53.786 "base_bdevs_list": [ 00:09:53.786 { 00:09:53.786 "name": "BaseBdev1", 00:09:53.786 "uuid": "06108450-aceb-4cf9-aabc-75c516f8b7e7", 00:09:53.786 "is_configured": true, 00:09:53.786 "data_offset": 0, 00:09:53.786 "data_size": 65536 00:09:53.786 }, 00:09:53.786 { 00:09:53.786 "name": "BaseBdev2", 00:09:53.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.786 "is_configured": false, 00:09:53.786 "data_offset": 0, 00:09:53.786 "data_size": 0 00:09:53.786 }, 00:09:53.786 { 00:09:53.786 "name": "BaseBdev3", 00:09:53.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.786 "is_configured": false, 00:09:53.786 "data_offset": 0, 00:09:53.786 "data_size": 0 00:09:53.786 }, 00:09:53.786 { 00:09:53.786 "name": "BaseBdev4", 00:09:53.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.786 "is_configured": false, 00:09:53.786 "data_offset": 0, 00:09:53.786 "data_size": 0 00:09:53.786 } 00:09:53.786 ] 00:09:53.786 }' 00:09:53.786 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.786 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.355 [2024-11-18 03:09:57.657906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.355 BaseBdev2 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:54.355 03:09:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.355 [ 00:09:54.355 { 00:09:54.355 "name": "BaseBdev2", 00:09:54.355 "aliases": [ 00:09:54.355 "1a996847-1f12-4ddf-a6ca-fb0ccde582a4" 00:09:54.355 ], 00:09:54.355 "product_name": "Malloc disk", 00:09:54.355 "block_size": 512, 00:09:54.355 "num_blocks": 65536, 00:09:54.355 "uuid": "1a996847-1f12-4ddf-a6ca-fb0ccde582a4", 00:09:54.355 "assigned_rate_limits": { 00:09:54.355 "rw_ios_per_sec": 0, 00:09:54.355 "rw_mbytes_per_sec": 0, 00:09:54.355 "r_mbytes_per_sec": 0, 00:09:54.355 "w_mbytes_per_sec": 0 00:09:54.355 }, 00:09:54.355 "claimed": true, 00:09:54.355 "claim_type": "exclusive_write", 00:09:54.355 "zoned": false, 00:09:54.355 "supported_io_types": { 
00:09:54.355 "read": true, 00:09:54.355 "write": true, 00:09:54.355 "unmap": true, 00:09:54.355 "flush": true, 00:09:54.355 "reset": true, 00:09:54.355 "nvme_admin": false, 00:09:54.355 "nvme_io": false, 00:09:54.355 "nvme_io_md": false, 00:09:54.355 "write_zeroes": true, 00:09:54.355 "zcopy": true, 00:09:54.355 "get_zone_info": false, 00:09:54.355 "zone_management": false, 00:09:54.355 "zone_append": false, 00:09:54.355 "compare": false, 00:09:54.355 "compare_and_write": false, 00:09:54.355 "abort": true, 00:09:54.355 "seek_hole": false, 00:09:54.355 "seek_data": false, 00:09:54.355 "copy": true, 00:09:54.355 "nvme_iov_md": false 00:09:54.355 }, 00:09:54.355 "memory_domains": [ 00:09:54.355 { 00:09:54.355 "dma_device_id": "system", 00:09:54.355 "dma_device_type": 1 00:09:54.355 }, 00:09:54.355 { 00:09:54.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.355 "dma_device_type": 2 00:09:54.355 } 00:09:54.355 ], 00:09:54.355 "driver_specific": {} 00:09:54.355 } 00:09:54.355 ] 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.355 "name": "Existed_Raid", 00:09:54.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.355 "strip_size_kb": 64, 00:09:54.355 "state": "configuring", 00:09:54.355 "raid_level": "raid0", 00:09:54.355 "superblock": false, 00:09:54.355 "num_base_bdevs": 4, 00:09:54.355 "num_base_bdevs_discovered": 2, 00:09:54.355 "num_base_bdevs_operational": 4, 00:09:54.355 "base_bdevs_list": [ 00:09:54.355 { 00:09:54.355 "name": "BaseBdev1", 00:09:54.355 "uuid": "06108450-aceb-4cf9-aabc-75c516f8b7e7", 00:09:54.355 "is_configured": true, 00:09:54.355 "data_offset": 0, 00:09:54.355 "data_size": 65536 00:09:54.355 }, 00:09:54.355 { 00:09:54.355 "name": "BaseBdev2", 00:09:54.355 "uuid": "1a996847-1f12-4ddf-a6ca-fb0ccde582a4", 00:09:54.355 
"is_configured": true, 00:09:54.355 "data_offset": 0, 00:09:54.355 "data_size": 65536 00:09:54.355 }, 00:09:54.355 { 00:09:54.355 "name": "BaseBdev3", 00:09:54.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.355 "is_configured": false, 00:09:54.355 "data_offset": 0, 00:09:54.355 "data_size": 0 00:09:54.355 }, 00:09:54.355 { 00:09:54.355 "name": "BaseBdev4", 00:09:54.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.355 "is_configured": false, 00:09:54.355 "data_offset": 0, 00:09:54.355 "data_size": 0 00:09:54.355 } 00:09:54.355 ] 00:09:54.355 }' 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.355 03:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.614 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:54.614 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.614 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.873 [2024-11-18 03:09:58.200160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:54.873 BaseBdev3 00:09:54.873 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.873 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:54.873 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:54.873 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:54.873 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:54.873 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:54.873 03:09:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:54.873 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:54.873 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.873 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.873 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.873 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:54.873 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.873 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.873 [ 00:09:54.873 { 00:09:54.873 "name": "BaseBdev3", 00:09:54.873 "aliases": [ 00:09:54.873 "f51fa229-62eb-4c12-a31f-edbf237ac776" 00:09:54.873 ], 00:09:54.873 "product_name": "Malloc disk", 00:09:54.873 "block_size": 512, 00:09:54.873 "num_blocks": 65536, 00:09:54.873 "uuid": "f51fa229-62eb-4c12-a31f-edbf237ac776", 00:09:54.873 "assigned_rate_limits": { 00:09:54.873 "rw_ios_per_sec": 0, 00:09:54.873 "rw_mbytes_per_sec": 0, 00:09:54.873 "r_mbytes_per_sec": 0, 00:09:54.873 "w_mbytes_per_sec": 0 00:09:54.873 }, 00:09:54.874 "claimed": true, 00:09:54.874 "claim_type": "exclusive_write", 00:09:54.874 "zoned": false, 00:09:54.874 "supported_io_types": { 00:09:54.874 "read": true, 00:09:54.874 "write": true, 00:09:54.874 "unmap": true, 00:09:54.874 "flush": true, 00:09:54.874 "reset": true, 00:09:54.874 "nvme_admin": false, 00:09:54.874 "nvme_io": false, 00:09:54.874 "nvme_io_md": false, 00:09:54.874 "write_zeroes": true, 00:09:54.874 "zcopy": true, 00:09:54.874 "get_zone_info": false, 00:09:54.874 "zone_management": false, 00:09:54.874 "zone_append": false, 00:09:54.874 "compare": false, 00:09:54.874 "compare_and_write": false, 
00:09:54.874 "abort": true, 00:09:54.874 "seek_hole": false, 00:09:54.874 "seek_data": false, 00:09:54.874 "copy": true, 00:09:54.874 "nvme_iov_md": false 00:09:54.874 }, 00:09:54.874 "memory_domains": [ 00:09:54.874 { 00:09:54.874 "dma_device_id": "system", 00:09:54.874 "dma_device_type": 1 00:09:54.874 }, 00:09:54.874 { 00:09:54.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.874 "dma_device_type": 2 00:09:54.874 } 00:09:54.874 ], 00:09:54.874 "driver_specific": {} 00:09:54.874 } 00:09:54.874 ] 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.874 "name": "Existed_Raid", 00:09:54.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.874 "strip_size_kb": 64, 00:09:54.874 "state": "configuring", 00:09:54.874 "raid_level": "raid0", 00:09:54.874 "superblock": false, 00:09:54.874 "num_base_bdevs": 4, 00:09:54.874 "num_base_bdevs_discovered": 3, 00:09:54.874 "num_base_bdevs_operational": 4, 00:09:54.874 "base_bdevs_list": [ 00:09:54.874 { 00:09:54.874 "name": "BaseBdev1", 00:09:54.874 "uuid": "06108450-aceb-4cf9-aabc-75c516f8b7e7", 00:09:54.874 "is_configured": true, 00:09:54.874 "data_offset": 0, 00:09:54.874 "data_size": 65536 00:09:54.874 }, 00:09:54.874 { 00:09:54.874 "name": "BaseBdev2", 00:09:54.874 "uuid": "1a996847-1f12-4ddf-a6ca-fb0ccde582a4", 00:09:54.874 "is_configured": true, 00:09:54.874 "data_offset": 0, 00:09:54.874 "data_size": 65536 00:09:54.874 }, 00:09:54.874 { 00:09:54.874 "name": "BaseBdev3", 00:09:54.874 "uuid": "f51fa229-62eb-4c12-a31f-edbf237ac776", 00:09:54.874 "is_configured": true, 00:09:54.874 "data_offset": 0, 00:09:54.874 "data_size": 65536 00:09:54.874 }, 00:09:54.874 { 00:09:54.874 "name": "BaseBdev4", 00:09:54.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.874 "is_configured": false, 
00:09:54.874 "data_offset": 0, 00:09:54.874 "data_size": 0 00:09:54.874 } 00:09:54.874 ] 00:09:54.874 }' 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.874 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.134 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:55.134 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.134 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.134 [2024-11-18 03:09:58.702475] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:55.134 [2024-11-18 03:09:58.702603] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:55.134 [2024-11-18 03:09:58.702632] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:55.134 [2024-11-18 03:09:58.702931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:55.134 [2024-11-18 03:09:58.703162] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:55.134 [2024-11-18 03:09:58.703216] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:55.134 [2024-11-18 03:09:58.703459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.134 BaseBdev4 00:09:55.134 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.134 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:55.134 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:55.134 03:09:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:55.134 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:55.134 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:55.134 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:55.134 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.393 [ 00:09:55.393 { 00:09:55.393 "name": "BaseBdev4", 00:09:55.393 "aliases": [ 00:09:55.393 "dd8a9201-10a3-43c0-b3f6-d95b87e36ca3" 00:09:55.393 ], 00:09:55.393 "product_name": "Malloc disk", 00:09:55.393 "block_size": 512, 00:09:55.393 "num_blocks": 65536, 00:09:55.393 "uuid": "dd8a9201-10a3-43c0-b3f6-d95b87e36ca3", 00:09:55.393 "assigned_rate_limits": { 00:09:55.393 "rw_ios_per_sec": 0, 00:09:55.393 "rw_mbytes_per_sec": 0, 00:09:55.393 "r_mbytes_per_sec": 0, 00:09:55.393 "w_mbytes_per_sec": 0 00:09:55.393 }, 00:09:55.393 "claimed": true, 00:09:55.393 "claim_type": "exclusive_write", 00:09:55.393 "zoned": false, 00:09:55.393 "supported_io_types": { 00:09:55.393 "read": true, 00:09:55.393 "write": true, 00:09:55.393 "unmap": true, 00:09:55.393 "flush": true, 00:09:55.393 "reset": true, 00:09:55.393 
"nvme_admin": false, 00:09:55.393 "nvme_io": false, 00:09:55.393 "nvme_io_md": false, 00:09:55.393 "write_zeroes": true, 00:09:55.393 "zcopy": true, 00:09:55.393 "get_zone_info": false, 00:09:55.393 "zone_management": false, 00:09:55.393 "zone_append": false, 00:09:55.393 "compare": false, 00:09:55.393 "compare_and_write": false, 00:09:55.393 "abort": true, 00:09:55.393 "seek_hole": false, 00:09:55.393 "seek_data": false, 00:09:55.393 "copy": true, 00:09:55.393 "nvme_iov_md": false 00:09:55.393 }, 00:09:55.393 "memory_domains": [ 00:09:55.393 { 00:09:55.393 "dma_device_id": "system", 00:09:55.393 "dma_device_type": 1 00:09:55.393 }, 00:09:55.393 { 00:09:55.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.393 "dma_device_type": 2 00:09:55.393 } 00:09:55.393 ], 00:09:55.393 "driver_specific": {} 00:09:55.393 } 00:09:55.393 ] 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.393 03:09:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.393 "name": "Existed_Raid", 00:09:55.393 "uuid": "e37801da-1fba-4ccf-8c12-44d6b7adb7c8", 00:09:55.393 "strip_size_kb": 64, 00:09:55.393 "state": "online", 00:09:55.393 "raid_level": "raid0", 00:09:55.393 "superblock": false, 00:09:55.393 "num_base_bdevs": 4, 00:09:55.393 "num_base_bdevs_discovered": 4, 00:09:55.393 "num_base_bdevs_operational": 4, 00:09:55.393 "base_bdevs_list": [ 00:09:55.393 { 00:09:55.393 "name": "BaseBdev1", 00:09:55.393 "uuid": "06108450-aceb-4cf9-aabc-75c516f8b7e7", 00:09:55.393 "is_configured": true, 00:09:55.393 "data_offset": 0, 00:09:55.393 "data_size": 65536 00:09:55.393 }, 00:09:55.393 { 00:09:55.393 "name": "BaseBdev2", 00:09:55.393 "uuid": "1a996847-1f12-4ddf-a6ca-fb0ccde582a4", 00:09:55.393 "is_configured": true, 00:09:55.393 "data_offset": 0, 00:09:55.393 "data_size": 65536 00:09:55.393 }, 00:09:55.393 { 00:09:55.393 "name": "BaseBdev3", 00:09:55.393 "uuid": 
"f51fa229-62eb-4c12-a31f-edbf237ac776", 00:09:55.393 "is_configured": true, 00:09:55.393 "data_offset": 0, 00:09:55.393 "data_size": 65536 00:09:55.393 }, 00:09:55.393 { 00:09:55.393 "name": "BaseBdev4", 00:09:55.393 "uuid": "dd8a9201-10a3-43c0-b3f6-d95b87e36ca3", 00:09:55.393 "is_configured": true, 00:09:55.393 "data_offset": 0, 00:09:55.393 "data_size": 65536 00:09:55.393 } 00:09:55.393 ] 00:09:55.393 }' 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.393 03:09:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.652 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:55.652 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:55.652 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:55.652 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:55.652 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:55.652 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:55.652 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:55.652 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.652 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.652 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:55.652 [2024-11-18 03:09:59.110211] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.652 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.652 03:09:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:55.652 "name": "Existed_Raid", 00:09:55.652 "aliases": [ 00:09:55.652 "e37801da-1fba-4ccf-8c12-44d6b7adb7c8" 00:09:55.652 ], 00:09:55.652 "product_name": "Raid Volume", 00:09:55.652 "block_size": 512, 00:09:55.652 "num_blocks": 262144, 00:09:55.652 "uuid": "e37801da-1fba-4ccf-8c12-44d6b7adb7c8", 00:09:55.652 "assigned_rate_limits": { 00:09:55.652 "rw_ios_per_sec": 0, 00:09:55.652 "rw_mbytes_per_sec": 0, 00:09:55.652 "r_mbytes_per_sec": 0, 00:09:55.652 "w_mbytes_per_sec": 0 00:09:55.652 }, 00:09:55.652 "claimed": false, 00:09:55.652 "zoned": false, 00:09:55.653 "supported_io_types": { 00:09:55.653 "read": true, 00:09:55.653 "write": true, 00:09:55.653 "unmap": true, 00:09:55.653 "flush": true, 00:09:55.653 "reset": true, 00:09:55.653 "nvme_admin": false, 00:09:55.653 "nvme_io": false, 00:09:55.653 "nvme_io_md": false, 00:09:55.653 "write_zeroes": true, 00:09:55.653 "zcopy": false, 00:09:55.653 "get_zone_info": false, 00:09:55.653 "zone_management": false, 00:09:55.653 "zone_append": false, 00:09:55.653 "compare": false, 00:09:55.653 "compare_and_write": false, 00:09:55.653 "abort": false, 00:09:55.653 "seek_hole": false, 00:09:55.653 "seek_data": false, 00:09:55.653 "copy": false, 00:09:55.653 "nvme_iov_md": false 00:09:55.653 }, 00:09:55.653 "memory_domains": [ 00:09:55.653 { 00:09:55.653 "dma_device_id": "system", 00:09:55.653 "dma_device_type": 1 00:09:55.653 }, 00:09:55.653 { 00:09:55.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.653 "dma_device_type": 2 00:09:55.653 }, 00:09:55.653 { 00:09:55.653 "dma_device_id": "system", 00:09:55.653 "dma_device_type": 1 00:09:55.653 }, 00:09:55.653 { 00:09:55.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.653 "dma_device_type": 2 00:09:55.653 }, 00:09:55.653 { 00:09:55.653 "dma_device_id": "system", 00:09:55.653 "dma_device_type": 1 00:09:55.653 }, 00:09:55.653 { 00:09:55.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:55.653 "dma_device_type": 2 00:09:55.653 }, 00:09:55.653 { 00:09:55.653 "dma_device_id": "system", 00:09:55.653 "dma_device_type": 1 00:09:55.653 }, 00:09:55.653 { 00:09:55.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.653 "dma_device_type": 2 00:09:55.653 } 00:09:55.653 ], 00:09:55.653 "driver_specific": { 00:09:55.653 "raid": { 00:09:55.653 "uuid": "e37801da-1fba-4ccf-8c12-44d6b7adb7c8", 00:09:55.653 "strip_size_kb": 64, 00:09:55.653 "state": "online", 00:09:55.653 "raid_level": "raid0", 00:09:55.653 "superblock": false, 00:09:55.653 "num_base_bdevs": 4, 00:09:55.653 "num_base_bdevs_discovered": 4, 00:09:55.653 "num_base_bdevs_operational": 4, 00:09:55.653 "base_bdevs_list": [ 00:09:55.653 { 00:09:55.653 "name": "BaseBdev1", 00:09:55.653 "uuid": "06108450-aceb-4cf9-aabc-75c516f8b7e7", 00:09:55.653 "is_configured": true, 00:09:55.653 "data_offset": 0, 00:09:55.653 "data_size": 65536 00:09:55.653 }, 00:09:55.653 { 00:09:55.653 "name": "BaseBdev2", 00:09:55.653 "uuid": "1a996847-1f12-4ddf-a6ca-fb0ccde582a4", 00:09:55.653 "is_configured": true, 00:09:55.653 "data_offset": 0, 00:09:55.653 "data_size": 65536 00:09:55.653 }, 00:09:55.653 { 00:09:55.653 "name": "BaseBdev3", 00:09:55.653 "uuid": "f51fa229-62eb-4c12-a31f-edbf237ac776", 00:09:55.653 "is_configured": true, 00:09:55.653 "data_offset": 0, 00:09:55.653 "data_size": 65536 00:09:55.653 }, 00:09:55.653 { 00:09:55.653 "name": "BaseBdev4", 00:09:55.653 "uuid": "dd8a9201-10a3-43c0-b3f6-d95b87e36ca3", 00:09:55.653 "is_configured": true, 00:09:55.653 "data_offset": 0, 00:09:55.653 "data_size": 65536 00:09:55.653 } 00:09:55.653 ] 00:09:55.653 } 00:09:55.653 } 00:09:55.653 }' 00:09:55.653 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:55.653 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:55.653 BaseBdev2 00:09:55.653 BaseBdev3 
00:09:55.653 BaseBdev4' 00:09:55.653 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.912 03:09:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.912 03:09:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.912 [2024-11-18 03:09:59.445315] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:55.912 [2024-11-18 03:09:59.445391] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.912 [2024-11-18 03:09:59.445484] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.912 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.171 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.171 "name": "Existed_Raid", 00:09:56.171 "uuid": "e37801da-1fba-4ccf-8c12-44d6b7adb7c8", 00:09:56.171 "strip_size_kb": 64, 00:09:56.171 "state": "offline", 00:09:56.171 "raid_level": "raid0", 00:09:56.171 "superblock": false, 00:09:56.171 "num_base_bdevs": 4, 00:09:56.171 "num_base_bdevs_discovered": 3, 00:09:56.171 "num_base_bdevs_operational": 3, 00:09:56.171 "base_bdevs_list": [ 00:09:56.171 { 00:09:56.171 "name": null, 00:09:56.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.171 "is_configured": false, 00:09:56.171 "data_offset": 0, 00:09:56.171 "data_size": 65536 00:09:56.171 }, 00:09:56.171 { 00:09:56.171 "name": "BaseBdev2", 00:09:56.171 "uuid": "1a996847-1f12-4ddf-a6ca-fb0ccde582a4", 00:09:56.171 "is_configured": 
true, 00:09:56.171 "data_offset": 0, 00:09:56.171 "data_size": 65536 00:09:56.171 }, 00:09:56.171 { 00:09:56.171 "name": "BaseBdev3", 00:09:56.171 "uuid": "f51fa229-62eb-4c12-a31f-edbf237ac776", 00:09:56.171 "is_configured": true, 00:09:56.171 "data_offset": 0, 00:09:56.171 "data_size": 65536 00:09:56.171 }, 00:09:56.171 { 00:09:56.171 "name": "BaseBdev4", 00:09:56.171 "uuid": "dd8a9201-10a3-43c0-b3f6-d95b87e36ca3", 00:09:56.171 "is_configured": true, 00:09:56.171 "data_offset": 0, 00:09:56.171 "data_size": 65536 00:09:56.171 } 00:09:56.171 ] 00:09:56.171 }' 00:09:56.171 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.171 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.431 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:56.431 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.431 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:56.431 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.431 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.431 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.431 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.431 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:56.431 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:56.431 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:56.431 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:56.431 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.431 [2024-11-18 03:09:59.988073] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:56.431 03:09:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.431 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:56.431 03:09:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.432 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.432 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:56.432 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.432 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.691 [2024-11-18 03:10:00.055445] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:56.691 03:10:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.691 [2024-11-18 03:10:00.122691] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:56.691 [2024-11-18 03:10:00.122795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.691 BaseBdev2 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.691 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.692 [ 00:09:56.692 { 00:09:56.692 "name": "BaseBdev2", 00:09:56.692 "aliases": [ 00:09:56.692 "38ba63f5-3b36-45af-87a1-bd67b8eb5efa" 00:09:56.692 ], 00:09:56.692 "product_name": "Malloc disk", 00:09:56.692 "block_size": 512, 00:09:56.692 "num_blocks": 65536, 00:09:56.692 "uuid": "38ba63f5-3b36-45af-87a1-bd67b8eb5efa", 00:09:56.692 "assigned_rate_limits": { 00:09:56.692 "rw_ios_per_sec": 0, 00:09:56.692 "rw_mbytes_per_sec": 0, 00:09:56.692 "r_mbytes_per_sec": 0, 00:09:56.692 "w_mbytes_per_sec": 0 00:09:56.692 }, 00:09:56.692 "claimed": false, 00:09:56.692 "zoned": false, 00:09:56.692 "supported_io_types": { 00:09:56.692 "read": true, 00:09:56.692 "write": true, 00:09:56.692 "unmap": true, 00:09:56.692 "flush": true, 00:09:56.692 "reset": true, 00:09:56.692 "nvme_admin": false, 00:09:56.692 "nvme_io": false, 00:09:56.692 "nvme_io_md": false, 00:09:56.692 "write_zeroes": true, 00:09:56.692 "zcopy": true, 00:09:56.692 "get_zone_info": false, 00:09:56.692 "zone_management": false, 00:09:56.692 "zone_append": false, 00:09:56.692 "compare": false, 00:09:56.692 "compare_and_write": false, 00:09:56.692 "abort": true, 00:09:56.692 "seek_hole": false, 00:09:56.692 "seek_data": false, 
00:09:56.692 "copy": true, 00:09:56.692 "nvme_iov_md": false 00:09:56.692 }, 00:09:56.692 "memory_domains": [ 00:09:56.692 { 00:09:56.692 "dma_device_id": "system", 00:09:56.692 "dma_device_type": 1 00:09:56.692 }, 00:09:56.692 { 00:09:56.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.692 "dma_device_type": 2 00:09:56.692 } 00:09:56.692 ], 00:09:56.692 "driver_specific": {} 00:09:56.692 } 00:09:56.692 ] 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.692 BaseBdev3 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:56.692 
03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.692 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.951 [ 00:09:56.951 { 00:09:56.951 "name": "BaseBdev3", 00:09:56.951 "aliases": [ 00:09:56.951 "78fb558c-2992-42fa-9148-2efbc3a264af" 00:09:56.951 ], 00:09:56.951 "product_name": "Malloc disk", 00:09:56.951 "block_size": 512, 00:09:56.951 "num_blocks": 65536, 00:09:56.951 "uuid": "78fb558c-2992-42fa-9148-2efbc3a264af", 00:09:56.951 "assigned_rate_limits": { 00:09:56.951 "rw_ios_per_sec": 0, 00:09:56.951 "rw_mbytes_per_sec": 0, 00:09:56.951 "r_mbytes_per_sec": 0, 00:09:56.951 "w_mbytes_per_sec": 0 00:09:56.951 }, 00:09:56.951 "claimed": false, 00:09:56.951 "zoned": false, 00:09:56.951 "supported_io_types": { 00:09:56.951 "read": true, 00:09:56.951 "write": true, 00:09:56.952 "unmap": true, 00:09:56.952 "flush": true, 00:09:56.952 "reset": true, 00:09:56.952 "nvme_admin": false, 00:09:56.952 "nvme_io": false, 00:09:56.952 "nvme_io_md": false, 00:09:56.952 "write_zeroes": true, 00:09:56.952 "zcopy": true, 00:09:56.952 "get_zone_info": false, 00:09:56.952 "zone_management": false, 00:09:56.952 "zone_append": false, 00:09:56.952 "compare": false, 00:09:56.952 "compare_and_write": false, 00:09:56.952 "abort": true, 00:09:56.952 "seek_hole": false, 00:09:56.952 "seek_data": false, 00:09:56.952 
"copy": true, 00:09:56.952 "nvme_iov_md": false 00:09:56.952 }, 00:09:56.952 "memory_domains": [ 00:09:56.952 { 00:09:56.952 "dma_device_id": "system", 00:09:56.952 "dma_device_type": 1 00:09:56.952 }, 00:09:56.952 { 00:09:56.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.952 "dma_device_type": 2 00:09:56.952 } 00:09:56.952 ], 00:09:56.952 "driver_specific": {} 00:09:56.952 } 00:09:56.952 ] 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.952 BaseBdev4 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:56.952 03:10:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.952 [ 00:09:56.952 { 00:09:56.952 "name": "BaseBdev4", 00:09:56.952 "aliases": [ 00:09:56.952 "c9f4a9c3-8941-4bfa-bf64-1dca4f7cf355" 00:09:56.952 ], 00:09:56.952 "product_name": "Malloc disk", 00:09:56.952 "block_size": 512, 00:09:56.952 "num_blocks": 65536, 00:09:56.952 "uuid": "c9f4a9c3-8941-4bfa-bf64-1dca4f7cf355", 00:09:56.952 "assigned_rate_limits": { 00:09:56.952 "rw_ios_per_sec": 0, 00:09:56.952 "rw_mbytes_per_sec": 0, 00:09:56.952 "r_mbytes_per_sec": 0, 00:09:56.952 "w_mbytes_per_sec": 0 00:09:56.952 }, 00:09:56.952 "claimed": false, 00:09:56.952 "zoned": false, 00:09:56.952 "supported_io_types": { 00:09:56.952 "read": true, 00:09:56.952 "write": true, 00:09:56.952 "unmap": true, 00:09:56.952 "flush": true, 00:09:56.952 "reset": true, 00:09:56.952 "nvme_admin": false, 00:09:56.952 "nvme_io": false, 00:09:56.952 "nvme_io_md": false, 00:09:56.952 "write_zeroes": true, 00:09:56.952 "zcopy": true, 00:09:56.952 "get_zone_info": false, 00:09:56.952 "zone_management": false, 00:09:56.952 "zone_append": false, 00:09:56.952 "compare": false, 00:09:56.952 "compare_and_write": false, 00:09:56.952 "abort": true, 00:09:56.952 "seek_hole": false, 00:09:56.952 "seek_data": false, 00:09:56.952 "copy": true, 
00:09:56.952 "nvme_iov_md": false 00:09:56.952 }, 00:09:56.952 "memory_domains": [ 00:09:56.952 { 00:09:56.952 "dma_device_id": "system", 00:09:56.952 "dma_device_type": 1 00:09:56.952 }, 00:09:56.952 { 00:09:56.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.952 "dma_device_type": 2 00:09:56.952 } 00:09:56.952 ], 00:09:56.952 "driver_specific": {} 00:09:56.952 } 00:09:56.952 ] 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.952 [2024-11-18 03:10:00.340077] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.952 [2024-11-18 03:10:00.340167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.952 [2024-11-18 03:10:00.340212] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.952 [2024-11-18 03:10:00.342245] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:56.952 [2024-11-18 03:10:00.342336] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.952 03:10:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.952 "name": "Existed_Raid", 00:09:56.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.952 "strip_size_kb": 64, 00:09:56.952 "state": "configuring", 00:09:56.952 
"raid_level": "raid0", 00:09:56.952 "superblock": false, 00:09:56.952 "num_base_bdevs": 4, 00:09:56.952 "num_base_bdevs_discovered": 3, 00:09:56.952 "num_base_bdevs_operational": 4, 00:09:56.952 "base_bdevs_list": [ 00:09:56.952 { 00:09:56.952 "name": "BaseBdev1", 00:09:56.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.952 "is_configured": false, 00:09:56.952 "data_offset": 0, 00:09:56.952 "data_size": 0 00:09:56.952 }, 00:09:56.952 { 00:09:56.952 "name": "BaseBdev2", 00:09:56.952 "uuid": "38ba63f5-3b36-45af-87a1-bd67b8eb5efa", 00:09:56.952 "is_configured": true, 00:09:56.952 "data_offset": 0, 00:09:56.952 "data_size": 65536 00:09:56.952 }, 00:09:56.952 { 00:09:56.952 "name": "BaseBdev3", 00:09:56.952 "uuid": "78fb558c-2992-42fa-9148-2efbc3a264af", 00:09:56.952 "is_configured": true, 00:09:56.952 "data_offset": 0, 00:09:56.952 "data_size": 65536 00:09:56.952 }, 00:09:56.952 { 00:09:56.952 "name": "BaseBdev4", 00:09:56.952 "uuid": "c9f4a9c3-8941-4bfa-bf64-1dca4f7cf355", 00:09:56.952 "is_configured": true, 00:09:56.952 "data_offset": 0, 00:09:56.952 "data_size": 65536 00:09:56.952 } 00:09:56.952 ] 00:09:56.952 }' 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.952 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.212 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:57.212 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.212 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.212 [2024-11-18 03:10:00.775316] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:57.212 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.212 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:57.212 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.212 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.212 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.212 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.212 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.212 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.212 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.212 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.212 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.471 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.471 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.471 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.471 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.471 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.471 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.471 "name": "Existed_Raid", 00:09:57.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.471 "strip_size_kb": 64, 00:09:57.471 "state": "configuring", 00:09:57.471 "raid_level": "raid0", 00:09:57.471 "superblock": false, 00:09:57.471 
"num_base_bdevs": 4, 00:09:57.471 "num_base_bdevs_discovered": 2, 00:09:57.471 "num_base_bdevs_operational": 4, 00:09:57.471 "base_bdevs_list": [ 00:09:57.471 { 00:09:57.471 "name": "BaseBdev1", 00:09:57.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.471 "is_configured": false, 00:09:57.471 "data_offset": 0, 00:09:57.471 "data_size": 0 00:09:57.471 }, 00:09:57.471 { 00:09:57.471 "name": null, 00:09:57.471 "uuid": "38ba63f5-3b36-45af-87a1-bd67b8eb5efa", 00:09:57.471 "is_configured": false, 00:09:57.471 "data_offset": 0, 00:09:57.471 "data_size": 65536 00:09:57.471 }, 00:09:57.471 { 00:09:57.471 "name": "BaseBdev3", 00:09:57.471 "uuid": "78fb558c-2992-42fa-9148-2efbc3a264af", 00:09:57.471 "is_configured": true, 00:09:57.471 "data_offset": 0, 00:09:57.471 "data_size": 65536 00:09:57.471 }, 00:09:57.471 { 00:09:57.471 "name": "BaseBdev4", 00:09:57.471 "uuid": "c9f4a9c3-8941-4bfa-bf64-1dca4f7cf355", 00:09:57.471 "is_configured": true, 00:09:57.471 "data_offset": 0, 00:09:57.471 "data_size": 65536 00:09:57.471 } 00:09:57.471 ] 00:09:57.471 }' 00:09:57.471 03:10:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.471 03:10:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:57.731 03:10:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.731 [2024-11-18 03:10:01.217812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.731 BaseBdev1 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.731 [ 00:09:57.731 { 00:09:57.731 "name": "BaseBdev1", 00:09:57.731 "aliases": [ 00:09:57.731 "2e222baf-dee8-49c1-b538-2822a1271cc7" 00:09:57.731 ], 00:09:57.731 "product_name": "Malloc disk", 00:09:57.731 "block_size": 512, 00:09:57.731 "num_blocks": 65536, 00:09:57.731 "uuid": "2e222baf-dee8-49c1-b538-2822a1271cc7", 00:09:57.731 "assigned_rate_limits": { 00:09:57.731 "rw_ios_per_sec": 0, 00:09:57.731 "rw_mbytes_per_sec": 0, 00:09:57.731 "r_mbytes_per_sec": 0, 00:09:57.731 "w_mbytes_per_sec": 0 00:09:57.731 }, 00:09:57.731 "claimed": true, 00:09:57.731 "claim_type": "exclusive_write", 00:09:57.731 "zoned": false, 00:09:57.731 "supported_io_types": { 00:09:57.731 "read": true, 00:09:57.731 "write": true, 00:09:57.731 "unmap": true, 00:09:57.731 "flush": true, 00:09:57.731 "reset": true, 00:09:57.731 "nvme_admin": false, 00:09:57.731 "nvme_io": false, 00:09:57.731 "nvme_io_md": false, 00:09:57.731 "write_zeroes": true, 00:09:57.731 "zcopy": true, 00:09:57.731 "get_zone_info": false, 00:09:57.731 "zone_management": false, 00:09:57.731 "zone_append": false, 00:09:57.731 "compare": false, 00:09:57.731 "compare_and_write": false, 00:09:57.731 "abort": true, 00:09:57.731 "seek_hole": false, 00:09:57.731 "seek_data": false, 00:09:57.731 "copy": true, 00:09:57.731 "nvme_iov_md": false 00:09:57.731 }, 00:09:57.731 "memory_domains": [ 00:09:57.731 { 00:09:57.731 "dma_device_id": "system", 00:09:57.731 "dma_device_type": 1 00:09:57.731 }, 00:09:57.731 { 00:09:57.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.731 "dma_device_type": 2 00:09:57.731 } 00:09:57.731 ], 00:09:57.731 "driver_specific": {} 00:09:57.731 } 00:09:57.731 ] 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.731 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.989 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.989 "name": "Existed_Raid", 00:09:57.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.989 "strip_size_kb": 64, 00:09:57.989 "state": "configuring", 00:09:57.989 "raid_level": "raid0", 00:09:57.989 "superblock": false, 
00:09:57.989 "num_base_bdevs": 4, 00:09:57.989 "num_base_bdevs_discovered": 3, 00:09:57.989 "num_base_bdevs_operational": 4, 00:09:57.989 "base_bdevs_list": [ 00:09:57.989 { 00:09:57.989 "name": "BaseBdev1", 00:09:57.989 "uuid": "2e222baf-dee8-49c1-b538-2822a1271cc7", 00:09:57.989 "is_configured": true, 00:09:57.989 "data_offset": 0, 00:09:57.989 "data_size": 65536 00:09:57.989 }, 00:09:57.989 { 00:09:57.989 "name": null, 00:09:57.989 "uuid": "38ba63f5-3b36-45af-87a1-bd67b8eb5efa", 00:09:57.989 "is_configured": false, 00:09:57.989 "data_offset": 0, 00:09:57.989 "data_size": 65536 00:09:57.989 }, 00:09:57.989 { 00:09:57.989 "name": "BaseBdev3", 00:09:57.989 "uuid": "78fb558c-2992-42fa-9148-2efbc3a264af", 00:09:57.989 "is_configured": true, 00:09:57.989 "data_offset": 0, 00:09:57.989 "data_size": 65536 00:09:57.989 }, 00:09:57.989 { 00:09:57.989 "name": "BaseBdev4", 00:09:57.989 "uuid": "c9f4a9c3-8941-4bfa-bf64-1dca4f7cf355", 00:09:57.989 "is_configured": true, 00:09:57.989 "data_offset": 0, 00:09:57.989 "data_size": 65536 00:09:57.989 } 00:09:57.989 ] 00:09:57.989 }' 00:09:57.989 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.989 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:58.248 03:10:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.248 [2024-11-18 03:10:01.740998] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.248 "name": "Existed_Raid", 00:09:58.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.248 "strip_size_kb": 64, 00:09:58.248 "state": "configuring", 00:09:58.248 "raid_level": "raid0", 00:09:58.248 "superblock": false, 00:09:58.248 "num_base_bdevs": 4, 00:09:58.248 "num_base_bdevs_discovered": 2, 00:09:58.248 "num_base_bdevs_operational": 4, 00:09:58.248 "base_bdevs_list": [ 00:09:58.248 { 00:09:58.248 "name": "BaseBdev1", 00:09:58.248 "uuid": "2e222baf-dee8-49c1-b538-2822a1271cc7", 00:09:58.248 "is_configured": true, 00:09:58.248 "data_offset": 0, 00:09:58.248 "data_size": 65536 00:09:58.248 }, 00:09:58.248 { 00:09:58.248 "name": null, 00:09:58.248 "uuid": "38ba63f5-3b36-45af-87a1-bd67b8eb5efa", 00:09:58.248 "is_configured": false, 00:09:58.248 "data_offset": 0, 00:09:58.248 "data_size": 65536 00:09:58.248 }, 00:09:58.248 { 00:09:58.248 "name": null, 00:09:58.248 "uuid": "78fb558c-2992-42fa-9148-2efbc3a264af", 00:09:58.248 "is_configured": false, 00:09:58.248 "data_offset": 0, 00:09:58.248 "data_size": 65536 00:09:58.248 }, 00:09:58.248 { 00:09:58.248 "name": "BaseBdev4", 00:09:58.248 "uuid": "c9f4a9c3-8941-4bfa-bf64-1dca4f7cf355", 00:09:58.248 "is_configured": true, 00:09:58.248 "data_offset": 0, 00:09:58.248 "data_size": 65536 00:09:58.248 } 00:09:58.248 ] 00:09:58.248 }' 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.248 03:10:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.817 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:58.817 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:58.817 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.817 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.818 [2024-11-18 03:10:02.256155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.818 "name": "Existed_Raid", 00:09:58.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.818 "strip_size_kb": 64, 00:09:58.818 "state": "configuring", 00:09:58.818 "raid_level": "raid0", 00:09:58.818 "superblock": false, 00:09:58.818 "num_base_bdevs": 4, 00:09:58.818 "num_base_bdevs_discovered": 3, 00:09:58.818 "num_base_bdevs_operational": 4, 00:09:58.818 "base_bdevs_list": [ 00:09:58.818 { 00:09:58.818 "name": "BaseBdev1", 00:09:58.818 "uuid": "2e222baf-dee8-49c1-b538-2822a1271cc7", 00:09:58.818 "is_configured": true, 00:09:58.818 "data_offset": 0, 00:09:58.818 "data_size": 65536 00:09:58.818 }, 00:09:58.818 { 00:09:58.818 "name": null, 00:09:58.818 "uuid": "38ba63f5-3b36-45af-87a1-bd67b8eb5efa", 00:09:58.818 "is_configured": false, 00:09:58.818 "data_offset": 0, 00:09:58.818 "data_size": 65536 00:09:58.818 }, 00:09:58.818 { 00:09:58.818 "name": "BaseBdev3", 00:09:58.818 "uuid": "78fb558c-2992-42fa-9148-2efbc3a264af", 00:09:58.818 "is_configured": 
true, 00:09:58.818 "data_offset": 0, 00:09:58.818 "data_size": 65536 00:09:58.818 }, 00:09:58.818 { 00:09:58.818 "name": "BaseBdev4", 00:09:58.818 "uuid": "c9f4a9c3-8941-4bfa-bf64-1dca4f7cf355", 00:09:58.818 "is_configured": true, 00:09:58.818 "data_offset": 0, 00:09:58.818 "data_size": 65536 00:09:58.818 } 00:09:58.818 ] 00:09:58.818 }' 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.818 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.412 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.413 [2024-11-18 03:10:02.743317] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
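After `bdev_raid_remove_base_bdev`, the log shows that removed slots persist in `base_bdevs_list` with a `null` name and `is_configured: false`, and the test probes individual slots with filters like `jq '.[0].base_bdevs_list[2].is_configured'`. A small sketch of the relationship those probes rely on, using slot values copied from the log (a sample, not live RPC output): `num_base_bdevs_discovered` is simply the count of configured slots.

```python
import json

# base_bdevs_list as reported after two base bdevs are removed: the slots
# remain, but with name=null and is_configured=false (values from the log).
base_bdevs = json.loads("""[
  {"name": "BaseBdev1", "is_configured": true},
  {"name": null, "is_configured": false},
  {"name": null, "is_configured": false},
  {"name": "BaseBdev4", "is_configured": true}
]""")

# The jq probe '.[0].base_bdevs_list[2].is_configured' reads one slot;
# the discovered count is the number of slots still configured.
discovered = sum(1 for b in base_bdevs if b["is_configured"])
assert discovered == 2
assert base_bdevs[2]["is_configured"] is False
assert base_bdevs[2]["name"] is None
```

This is why re-adding a bdev with `bdev_raid_add_base_bdev` bumps `num_base_bdevs_discovered` back up without changing `num_base_bdevs_operational`.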
-- # local raid_bdev_name=Existed_Raid 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.413 "name": "Existed_Raid", 00:09:59.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.413 "strip_size_kb": 64, 00:09:59.413 "state": "configuring", 00:09:59.413 "raid_level": "raid0", 00:09:59.413 "superblock": false, 00:09:59.413 "num_base_bdevs": 4, 00:09:59.413 "num_base_bdevs_discovered": 2, 00:09:59.413 "num_base_bdevs_operational": 4, 00:09:59.413 
"base_bdevs_list": [ 00:09:59.413 { 00:09:59.413 "name": null, 00:09:59.413 "uuid": "2e222baf-dee8-49c1-b538-2822a1271cc7", 00:09:59.413 "is_configured": false, 00:09:59.413 "data_offset": 0, 00:09:59.413 "data_size": 65536 00:09:59.413 }, 00:09:59.413 { 00:09:59.413 "name": null, 00:09:59.413 "uuid": "38ba63f5-3b36-45af-87a1-bd67b8eb5efa", 00:09:59.413 "is_configured": false, 00:09:59.413 "data_offset": 0, 00:09:59.413 "data_size": 65536 00:09:59.413 }, 00:09:59.413 { 00:09:59.413 "name": "BaseBdev3", 00:09:59.413 "uuid": "78fb558c-2992-42fa-9148-2efbc3a264af", 00:09:59.413 "is_configured": true, 00:09:59.413 "data_offset": 0, 00:09:59.413 "data_size": 65536 00:09:59.413 }, 00:09:59.413 { 00:09:59.413 "name": "BaseBdev4", 00:09:59.413 "uuid": "c9f4a9c3-8941-4bfa-bf64-1dca4f7cf355", 00:09:59.413 "is_configured": true, 00:09:59.413 "data_offset": 0, 00:09:59.413 "data_size": 65536 00:09:59.413 } 00:09:59.413 ] 00:09:59.413 }' 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.413 03:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:59.676 03:10:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.676 [2024-11-18 03:10:03.213002] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:59.676 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.936 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.936 "name": "Existed_Raid", 00:09:59.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.936 "strip_size_kb": 64, 00:09:59.936 "state": "configuring", 00:09:59.936 "raid_level": "raid0", 00:09:59.936 "superblock": false, 00:09:59.936 "num_base_bdevs": 4, 00:09:59.936 "num_base_bdevs_discovered": 3, 00:09:59.936 "num_base_bdevs_operational": 4, 00:09:59.936 "base_bdevs_list": [ 00:09:59.936 { 00:09:59.936 "name": null, 00:09:59.936 "uuid": "2e222baf-dee8-49c1-b538-2822a1271cc7", 00:09:59.936 "is_configured": false, 00:09:59.936 "data_offset": 0, 00:09:59.936 "data_size": 65536 00:09:59.936 }, 00:09:59.936 { 00:09:59.936 "name": "BaseBdev2", 00:09:59.936 "uuid": "38ba63f5-3b36-45af-87a1-bd67b8eb5efa", 00:09:59.936 "is_configured": true, 00:09:59.936 "data_offset": 0, 00:09:59.936 "data_size": 65536 00:09:59.936 }, 00:09:59.936 { 00:09:59.936 "name": "BaseBdev3", 00:09:59.936 "uuid": "78fb558c-2992-42fa-9148-2efbc3a264af", 00:09:59.936 "is_configured": true, 00:09:59.936 "data_offset": 0, 00:09:59.936 "data_size": 65536 00:09:59.936 }, 00:09:59.936 { 00:09:59.936 "name": "BaseBdev4", 00:09:59.936 "uuid": "c9f4a9c3-8941-4bfa-bf64-1dca4f7cf355", 00:09:59.936 "is_configured": true, 00:09:59.936 "data_offset": 0, 00:09:59.936 "data_size": 65536 00:09:59.936 } 00:09:59.936 ] 00:09:59.936 }' 00:09:59.936 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.936 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2e222baf-dee8-49c1-b538-2822a1271cc7 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.197 [2024-11-18 03:10:03.727210] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:00.197 [2024-11-18 03:10:03.727332] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:00.197 [2024-11-18 03:10:03.727357] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:00.197 [2024-11-18 03:10:03.727619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:00.197 [2024-11-18 03:10:03.727769] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:00.197 [2024-11-18 03:10:03.727814] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:00.197 [2024-11-18 03:10:03.728036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.197 NewBaseBdev 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.197 [ 00:10:00.197 { 
00:10:00.197 "name": "NewBaseBdev", 00:10:00.197 "aliases": [ 00:10:00.197 "2e222baf-dee8-49c1-b538-2822a1271cc7" 00:10:00.197 ], 00:10:00.197 "product_name": "Malloc disk", 00:10:00.197 "block_size": 512, 00:10:00.197 "num_blocks": 65536, 00:10:00.197 "uuid": "2e222baf-dee8-49c1-b538-2822a1271cc7", 00:10:00.197 "assigned_rate_limits": { 00:10:00.197 "rw_ios_per_sec": 0, 00:10:00.197 "rw_mbytes_per_sec": 0, 00:10:00.197 "r_mbytes_per_sec": 0, 00:10:00.197 "w_mbytes_per_sec": 0 00:10:00.197 }, 00:10:00.197 "claimed": true, 00:10:00.197 "claim_type": "exclusive_write", 00:10:00.197 "zoned": false, 00:10:00.197 "supported_io_types": { 00:10:00.197 "read": true, 00:10:00.197 "write": true, 00:10:00.197 "unmap": true, 00:10:00.197 "flush": true, 00:10:00.197 "reset": true, 00:10:00.197 "nvme_admin": false, 00:10:00.197 "nvme_io": false, 00:10:00.197 "nvme_io_md": false, 00:10:00.197 "write_zeroes": true, 00:10:00.197 "zcopy": true, 00:10:00.197 "get_zone_info": false, 00:10:00.197 "zone_management": false, 00:10:00.197 "zone_append": false, 00:10:00.197 "compare": false, 00:10:00.197 "compare_and_write": false, 00:10:00.197 "abort": true, 00:10:00.197 "seek_hole": false, 00:10:00.197 "seek_data": false, 00:10:00.197 "copy": true, 00:10:00.197 "nvme_iov_md": false 00:10:00.197 }, 00:10:00.197 "memory_domains": [ 00:10:00.197 { 00:10:00.197 "dma_device_id": "system", 00:10:00.197 "dma_device_type": 1 00:10:00.197 }, 00:10:00.197 { 00:10:00.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.197 "dma_device_type": 2 00:10:00.197 } 00:10:00.197 ], 00:10:00.197 "driver_specific": {} 00:10:00.197 } 00:10:00.197 ] 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:00.197 
03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.197 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.458 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.458 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.458 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.458 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.458 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.458 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.458 "name": "Existed_Raid", 00:10:00.458 "uuid": "3519bca2-aff5-4cf7-ad7e-58b262749313", 00:10:00.458 "strip_size_kb": 64, 00:10:00.458 "state": "online", 00:10:00.458 "raid_level": "raid0", 00:10:00.458 "superblock": false, 00:10:00.458 "num_base_bdevs": 4, 00:10:00.458 "num_base_bdevs_discovered": 4, 00:10:00.458 
"num_base_bdevs_operational": 4, 00:10:00.458 "base_bdevs_list": [ 00:10:00.458 { 00:10:00.458 "name": "NewBaseBdev", 00:10:00.458 "uuid": "2e222baf-dee8-49c1-b538-2822a1271cc7", 00:10:00.458 "is_configured": true, 00:10:00.458 "data_offset": 0, 00:10:00.458 "data_size": 65536 00:10:00.458 }, 00:10:00.458 { 00:10:00.458 "name": "BaseBdev2", 00:10:00.458 "uuid": "38ba63f5-3b36-45af-87a1-bd67b8eb5efa", 00:10:00.458 "is_configured": true, 00:10:00.458 "data_offset": 0, 00:10:00.458 "data_size": 65536 00:10:00.458 }, 00:10:00.458 { 00:10:00.458 "name": "BaseBdev3", 00:10:00.458 "uuid": "78fb558c-2992-42fa-9148-2efbc3a264af", 00:10:00.458 "is_configured": true, 00:10:00.458 "data_offset": 0, 00:10:00.458 "data_size": 65536 00:10:00.458 }, 00:10:00.458 { 00:10:00.458 "name": "BaseBdev4", 00:10:00.458 "uuid": "c9f4a9c3-8941-4bfa-bf64-1dca4f7cf355", 00:10:00.458 "is_configured": true, 00:10:00.458 "data_offset": 0, 00:10:00.458 "data_size": 65536 00:10:00.458 } 00:10:00.458 ] 00:10:00.458 }' 00:10:00.458 03:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.458 03:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.718 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:00.718 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:00.718 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:00.718 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:00.718 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:00.718 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:00.718 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:00.718 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.718 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.718 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:00.718 [2024-11-18 03:10:04.178858] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.719 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.719 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:00.719 "name": "Existed_Raid", 00:10:00.719 "aliases": [ 00:10:00.719 "3519bca2-aff5-4cf7-ad7e-58b262749313" 00:10:00.719 ], 00:10:00.719 "product_name": "Raid Volume", 00:10:00.719 "block_size": 512, 00:10:00.719 "num_blocks": 262144, 00:10:00.719 "uuid": "3519bca2-aff5-4cf7-ad7e-58b262749313", 00:10:00.719 "assigned_rate_limits": { 00:10:00.719 "rw_ios_per_sec": 0, 00:10:00.719 "rw_mbytes_per_sec": 0, 00:10:00.719 "r_mbytes_per_sec": 0, 00:10:00.719 "w_mbytes_per_sec": 0 00:10:00.719 }, 00:10:00.719 "claimed": false, 00:10:00.719 "zoned": false, 00:10:00.719 "supported_io_types": { 00:10:00.719 "read": true, 00:10:00.719 "write": true, 00:10:00.719 "unmap": true, 00:10:00.719 "flush": true, 00:10:00.719 "reset": true, 00:10:00.719 "nvme_admin": false, 00:10:00.719 "nvme_io": false, 00:10:00.719 "nvme_io_md": false, 00:10:00.719 "write_zeroes": true, 00:10:00.719 "zcopy": false, 00:10:00.719 "get_zone_info": false, 00:10:00.719 "zone_management": false, 00:10:00.719 "zone_append": false, 00:10:00.719 "compare": false, 00:10:00.719 "compare_and_write": false, 00:10:00.719 "abort": false, 00:10:00.719 "seek_hole": false, 00:10:00.719 "seek_data": false, 00:10:00.719 "copy": false, 00:10:00.719 "nvme_iov_md": false 00:10:00.719 }, 00:10:00.719 "memory_domains": [ 00:10:00.719 { 00:10:00.719 "dma_device_id": "system", 
00:10:00.719 "dma_device_type": 1 00:10:00.719 }, 00:10:00.719 { 00:10:00.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.719 "dma_device_type": 2 00:10:00.719 }, 00:10:00.719 { 00:10:00.719 "dma_device_id": "system", 00:10:00.719 "dma_device_type": 1 00:10:00.719 }, 00:10:00.719 { 00:10:00.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.719 "dma_device_type": 2 00:10:00.719 }, 00:10:00.719 { 00:10:00.719 "dma_device_id": "system", 00:10:00.719 "dma_device_type": 1 00:10:00.719 }, 00:10:00.719 { 00:10:00.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.719 "dma_device_type": 2 00:10:00.719 }, 00:10:00.719 { 00:10:00.719 "dma_device_id": "system", 00:10:00.719 "dma_device_type": 1 00:10:00.719 }, 00:10:00.719 { 00:10:00.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.719 "dma_device_type": 2 00:10:00.719 } 00:10:00.719 ], 00:10:00.719 "driver_specific": { 00:10:00.719 "raid": { 00:10:00.719 "uuid": "3519bca2-aff5-4cf7-ad7e-58b262749313", 00:10:00.719 "strip_size_kb": 64, 00:10:00.719 "state": "online", 00:10:00.719 "raid_level": "raid0", 00:10:00.719 "superblock": false, 00:10:00.719 "num_base_bdevs": 4, 00:10:00.719 "num_base_bdevs_discovered": 4, 00:10:00.719 "num_base_bdevs_operational": 4, 00:10:00.719 "base_bdevs_list": [ 00:10:00.719 { 00:10:00.719 "name": "NewBaseBdev", 00:10:00.719 "uuid": "2e222baf-dee8-49c1-b538-2822a1271cc7", 00:10:00.719 "is_configured": true, 00:10:00.719 "data_offset": 0, 00:10:00.719 "data_size": 65536 00:10:00.719 }, 00:10:00.719 { 00:10:00.719 "name": "BaseBdev2", 00:10:00.719 "uuid": "38ba63f5-3b36-45af-87a1-bd67b8eb5efa", 00:10:00.719 "is_configured": true, 00:10:00.719 "data_offset": 0, 00:10:00.719 "data_size": 65536 00:10:00.719 }, 00:10:00.719 { 00:10:00.719 "name": "BaseBdev3", 00:10:00.719 "uuid": "78fb558c-2992-42fa-9148-2efbc3a264af", 00:10:00.719 "is_configured": true, 00:10:00.719 "data_offset": 0, 00:10:00.719 "data_size": 65536 00:10:00.719 }, 00:10:00.719 { 00:10:00.719 "name": "BaseBdev4", 
00:10:00.719 "uuid": "c9f4a9c3-8941-4bfa-bf64-1dca4f7cf355", 00:10:00.719 "is_configured": true, 00:10:00.719 "data_offset": 0, 00:10:00.719 "data_size": 65536 00:10:00.719 } 00:10:00.719 ] 00:10:00.719 } 00:10:00.719 } 00:10:00.719 }' 00:10:00.719 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:00.719 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:00.719 BaseBdev2 00:10:00.719 BaseBdev3 00:10:00.719 BaseBdev4' 00:10:00.719 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.980 [2024-11-18 03:10:04.489954] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:00.980 [2024-11-18 03:10:04.490039] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.980 [2024-11-18 03:10:04.490137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.980 [2024-11-18 03:10:04.490237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.980 [2024-11-18 03:10:04.490286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80505 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 80505 
']' 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80505 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80505 00:10:00.980 killing process with pid 80505 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80505' 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80505 00:10:00.980 [2024-11-18 03:10:04.538162] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:00.980 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80505 00:10:01.241 [2024-11-18 03:10:04.579471] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.501 03:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:01.501 00:10:01.501 real 0m9.631s 00:10:01.501 user 0m16.440s 00:10:01.501 sys 0m1.987s 00:10:01.501 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:01.501 03:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.501 ************************************ 00:10:01.501 END TEST raid_state_function_test 00:10:01.501 ************************************ 00:10:01.501 03:10:04 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:10:01.502 
03:10:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:01.502 03:10:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:01.502 03:10:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.502 ************************************ 00:10:01.502 START TEST raid_state_function_test_sb 00:10:01.502 ************************************ 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81154 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:01.502 Process raid pid: 81154 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81154' 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81154 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81154 ']' 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.502 03:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.502 [2024-11-18 03:10:04.995315] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:01.502 [2024-11-18 03:10:04.995532] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.762 [2024-11-18 03:10:05.156948] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.762 [2024-11-18 03:10:05.208354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.762 [2024-11-18 03:10:05.252066] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.762 [2024-11-18 03:10:05.252098] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.332 03:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.333 [2024-11-18 03:10:05.853780] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.333 [2024-11-18 03:10:05.853904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.333 [2024-11-18 03:10:05.853957] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.333 [2024-11-18 03:10:05.854002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.333 [2024-11-18 03:10:05.854031] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:02.333 [2024-11-18 03:10:05.854060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.333 [2024-11-18 03:10:05.854129] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:02.333 [2024-11-18 03:10:05.854164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.333 03:10:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.333 03:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.593 03:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.593 "name": "Existed_Raid", 00:10:02.593 "uuid": "4f8b2894-a25e-4d62-9c1e-76ed6f7cb459", 00:10:02.593 "strip_size_kb": 64, 00:10:02.593 "state": "configuring", 00:10:02.593 "raid_level": "raid0", 00:10:02.593 "superblock": true, 00:10:02.593 "num_base_bdevs": 4, 00:10:02.593 "num_base_bdevs_discovered": 0, 00:10:02.593 "num_base_bdevs_operational": 4, 00:10:02.593 "base_bdevs_list": [ 00:10:02.593 { 00:10:02.593 "name": "BaseBdev1", 00:10:02.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.593 "is_configured": false, 00:10:02.593 "data_offset": 0, 00:10:02.593 "data_size": 0 00:10:02.593 }, 00:10:02.593 { 00:10:02.593 "name": "BaseBdev2", 00:10:02.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.593 "is_configured": false, 00:10:02.593 "data_offset": 0, 00:10:02.593 "data_size": 0 00:10:02.593 }, 00:10:02.593 { 00:10:02.593 "name": "BaseBdev3", 00:10:02.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.593 "is_configured": false, 00:10:02.593 "data_offset": 0, 00:10:02.593 "data_size": 0 00:10:02.593 }, 00:10:02.593 { 00:10:02.593 "name": "BaseBdev4", 00:10:02.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.593 "is_configured": false, 00:10:02.593 "data_offset": 0, 00:10:02.593 "data_size": 0 00:10:02.593 } 00:10:02.593 ] 00:10:02.593 }' 00:10:02.593 03:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.593 03:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.854 03:10:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.854 [2024-11-18 03:10:06.280991] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.854 [2024-11-18 03:10:06.281086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.854 [2024-11-18 03:10:06.293003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.854 [2024-11-18 03:10:06.293096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.854 [2024-11-18 03:10:06.293126] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.854 [2024-11-18 03:10:06.293151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.854 [2024-11-18 03:10:06.293172] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.854 [2024-11-18 03:10:06.293195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.854 [2024-11-18 03:10:06.293216] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:02.854 [2024-11-18 03:10:06.293246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.854 [2024-11-18 03:10:06.314218] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.854 BaseBdev1 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.854 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.854 [ 00:10:02.854 { 00:10:02.854 "name": "BaseBdev1", 00:10:02.854 "aliases": [ 00:10:02.854 "6776c4fa-3d73-4e53-a0e3-03a67bc2a1b8" 00:10:02.854 ], 00:10:02.854 "product_name": "Malloc disk", 00:10:02.854 "block_size": 512, 00:10:02.854 "num_blocks": 65536, 00:10:02.854 "uuid": "6776c4fa-3d73-4e53-a0e3-03a67bc2a1b8", 00:10:02.854 "assigned_rate_limits": { 00:10:02.854 "rw_ios_per_sec": 0, 00:10:02.854 "rw_mbytes_per_sec": 0, 00:10:02.854 "r_mbytes_per_sec": 0, 00:10:02.854 "w_mbytes_per_sec": 0 00:10:02.854 }, 00:10:02.854 "claimed": true, 00:10:02.854 "claim_type": "exclusive_write", 00:10:02.854 "zoned": false, 00:10:02.854 "supported_io_types": { 00:10:02.854 "read": true, 00:10:02.854 "write": true, 00:10:02.854 "unmap": true, 00:10:02.855 "flush": true, 00:10:02.855 "reset": true, 00:10:02.855 "nvme_admin": false, 00:10:02.855 "nvme_io": false, 00:10:02.855 "nvme_io_md": false, 00:10:02.855 "write_zeroes": true, 00:10:02.855 "zcopy": true, 00:10:02.855 "get_zone_info": false, 00:10:02.855 "zone_management": false, 00:10:02.855 "zone_append": false, 00:10:02.855 "compare": false, 00:10:02.855 "compare_and_write": false, 00:10:02.855 "abort": true, 00:10:02.855 "seek_hole": false, 00:10:02.855 "seek_data": false, 00:10:02.855 "copy": true, 00:10:02.855 "nvme_iov_md": false 00:10:02.855 }, 00:10:02.855 "memory_domains": [ 00:10:02.855 { 00:10:02.855 "dma_device_id": "system", 00:10:02.855 "dma_device_type": 1 00:10:02.855 }, 00:10:02.855 { 00:10:02.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.855 "dma_device_type": 2 00:10:02.855 } 
00:10:02.855 ], 00:10:02.855 "driver_specific": {} 00:10:02.855 } 00:10:02.855 ] 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.855 03:10:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.855 "name": "Existed_Raid", 00:10:02.855 "uuid": "4aca1963-ab6c-43b1-a8f8-ab6ecf671772", 00:10:02.855 "strip_size_kb": 64, 00:10:02.855 "state": "configuring", 00:10:02.855 "raid_level": "raid0", 00:10:02.855 "superblock": true, 00:10:02.855 "num_base_bdevs": 4, 00:10:02.855 "num_base_bdevs_discovered": 1, 00:10:02.855 "num_base_bdevs_operational": 4, 00:10:02.855 "base_bdevs_list": [ 00:10:02.855 { 00:10:02.855 "name": "BaseBdev1", 00:10:02.855 "uuid": "6776c4fa-3d73-4e53-a0e3-03a67bc2a1b8", 00:10:02.855 "is_configured": true, 00:10:02.855 "data_offset": 2048, 00:10:02.855 "data_size": 63488 00:10:02.855 }, 00:10:02.855 { 00:10:02.855 "name": "BaseBdev2", 00:10:02.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.855 "is_configured": false, 00:10:02.855 "data_offset": 0, 00:10:02.855 "data_size": 0 00:10:02.855 }, 00:10:02.855 { 00:10:02.855 "name": "BaseBdev3", 00:10:02.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.855 "is_configured": false, 00:10:02.855 "data_offset": 0, 00:10:02.855 "data_size": 0 00:10:02.855 }, 00:10:02.855 { 00:10:02.855 "name": "BaseBdev4", 00:10:02.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.855 "is_configured": false, 00:10:02.855 "data_offset": 0, 00:10:02.855 "data_size": 0 00:10:02.855 } 00:10:02.855 ] 00:10:02.855 }' 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.855 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.426 03:10:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.426 [2024-11-18 03:10:06.765497] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.426 [2024-11-18 03:10:06.765597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.426 [2024-11-18 03:10:06.777518] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.426 [2024-11-18 03:10:06.779429] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.426 [2024-11-18 03:10:06.779480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.426 [2024-11-18 03:10:06.779491] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:03.426 [2024-11-18 03:10:06.779501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.426 [2024-11-18 03:10:06.779508] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:03.426 [2024-11-18 03:10:06.779518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.426 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:03.426 "name": "Existed_Raid", 00:10:03.426 "uuid": "57450a1b-ea88-49f8-8307-e92333540152", 00:10:03.426 "strip_size_kb": 64, 00:10:03.426 "state": "configuring", 00:10:03.426 "raid_level": "raid0", 00:10:03.426 "superblock": true, 00:10:03.426 "num_base_bdevs": 4, 00:10:03.426 "num_base_bdevs_discovered": 1, 00:10:03.426 "num_base_bdevs_operational": 4, 00:10:03.427 "base_bdevs_list": [ 00:10:03.427 { 00:10:03.427 "name": "BaseBdev1", 00:10:03.427 "uuid": "6776c4fa-3d73-4e53-a0e3-03a67bc2a1b8", 00:10:03.427 "is_configured": true, 00:10:03.427 "data_offset": 2048, 00:10:03.427 "data_size": 63488 00:10:03.427 }, 00:10:03.427 { 00:10:03.427 "name": "BaseBdev2", 00:10:03.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.427 "is_configured": false, 00:10:03.427 "data_offset": 0, 00:10:03.427 "data_size": 0 00:10:03.427 }, 00:10:03.427 { 00:10:03.427 "name": "BaseBdev3", 00:10:03.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.427 "is_configured": false, 00:10:03.427 "data_offset": 0, 00:10:03.427 "data_size": 0 00:10:03.427 }, 00:10:03.427 { 00:10:03.427 "name": "BaseBdev4", 00:10:03.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.427 "is_configured": false, 00:10:03.427 "data_offset": 0, 00:10:03.427 "data_size": 0 00:10:03.427 } 00:10:03.427 ] 00:10:03.427 }' 00:10:03.427 03:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.427 03:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.687 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:03.687 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.687 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.946 [2024-11-18 03:10:07.263442] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:03.946 BaseBdev2 00:10:03.946 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.946 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:03.946 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:03.946 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:03.946 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:03.946 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:03.946 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:03.946 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:03.946 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.946 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.946 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.946 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:03.946 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.946 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.946 [ 00:10:03.946 { 00:10:03.946 "name": "BaseBdev2", 00:10:03.946 "aliases": [ 00:10:03.946 "75553ff3-18a8-4ba4-9743-bf7510839d2b" 00:10:03.946 ], 00:10:03.946 "product_name": "Malloc disk", 00:10:03.946 "block_size": 512, 00:10:03.946 "num_blocks": 65536, 00:10:03.946 "uuid": "75553ff3-18a8-4ba4-9743-bf7510839d2b", 
00:10:03.946 "assigned_rate_limits": { 00:10:03.946 "rw_ios_per_sec": 0, 00:10:03.946 "rw_mbytes_per_sec": 0, 00:10:03.946 "r_mbytes_per_sec": 0, 00:10:03.946 "w_mbytes_per_sec": 0 00:10:03.946 }, 00:10:03.946 "claimed": true, 00:10:03.946 "claim_type": "exclusive_write", 00:10:03.946 "zoned": false, 00:10:03.946 "supported_io_types": { 00:10:03.946 "read": true, 00:10:03.946 "write": true, 00:10:03.946 "unmap": true, 00:10:03.946 "flush": true, 00:10:03.946 "reset": true, 00:10:03.946 "nvme_admin": false, 00:10:03.946 "nvme_io": false, 00:10:03.946 "nvme_io_md": false, 00:10:03.946 "write_zeroes": true, 00:10:03.946 "zcopy": true, 00:10:03.946 "get_zone_info": false, 00:10:03.946 "zone_management": false, 00:10:03.946 "zone_append": false, 00:10:03.946 "compare": false, 00:10:03.946 "compare_and_write": false, 00:10:03.946 "abort": true, 00:10:03.946 "seek_hole": false, 00:10:03.946 "seek_data": false, 00:10:03.946 "copy": true, 00:10:03.946 "nvme_iov_md": false 00:10:03.947 }, 00:10:03.947 "memory_domains": [ 00:10:03.947 { 00:10:03.947 "dma_device_id": "system", 00:10:03.947 "dma_device_type": 1 00:10:03.947 }, 00:10:03.947 { 00:10:03.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.947 "dma_device_type": 2 00:10:03.947 } 00:10:03.947 ], 00:10:03.947 "driver_specific": {} 00:10:03.947 } 00:10:03.947 ] 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.947 "name": "Existed_Raid", 00:10:03.947 "uuid": "57450a1b-ea88-49f8-8307-e92333540152", 00:10:03.947 "strip_size_kb": 64, 00:10:03.947 "state": "configuring", 00:10:03.947 "raid_level": "raid0", 00:10:03.947 "superblock": true, 00:10:03.947 "num_base_bdevs": 4, 00:10:03.947 "num_base_bdevs_discovered": 2, 00:10:03.947 
"num_base_bdevs_operational": 4, 00:10:03.947 "base_bdevs_list": [ 00:10:03.947 { 00:10:03.947 "name": "BaseBdev1", 00:10:03.947 "uuid": "6776c4fa-3d73-4e53-a0e3-03a67bc2a1b8", 00:10:03.947 "is_configured": true, 00:10:03.947 "data_offset": 2048, 00:10:03.947 "data_size": 63488 00:10:03.947 }, 00:10:03.947 { 00:10:03.947 "name": "BaseBdev2", 00:10:03.947 "uuid": "75553ff3-18a8-4ba4-9743-bf7510839d2b", 00:10:03.947 "is_configured": true, 00:10:03.947 "data_offset": 2048, 00:10:03.947 "data_size": 63488 00:10:03.947 }, 00:10:03.947 { 00:10:03.947 "name": "BaseBdev3", 00:10:03.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.947 "is_configured": false, 00:10:03.947 "data_offset": 0, 00:10:03.947 "data_size": 0 00:10:03.947 }, 00:10:03.947 { 00:10:03.947 "name": "BaseBdev4", 00:10:03.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.947 "is_configured": false, 00:10:03.947 "data_offset": 0, 00:10:03.947 "data_size": 0 00:10:03.947 } 00:10:03.947 ] 00:10:03.947 }' 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.947 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.206 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:04.206 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.206 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.206 [2024-11-18 03:10:07.773798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.206 BaseBdev3 00:10:04.206 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.206 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:04.206 03:10:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:04.206 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.206 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:04.206 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.206 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.206 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.207 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.207 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.467 [ 00:10:04.467 { 00:10:04.467 "name": "BaseBdev3", 00:10:04.467 "aliases": [ 00:10:04.467 "869e3c99-5a83-4cb9-81aa-e9ecdfb4d2a4" 00:10:04.467 ], 00:10:04.467 "product_name": "Malloc disk", 00:10:04.467 "block_size": 512, 00:10:04.467 "num_blocks": 65536, 00:10:04.467 "uuid": "869e3c99-5a83-4cb9-81aa-e9ecdfb4d2a4", 00:10:04.467 "assigned_rate_limits": { 00:10:04.467 "rw_ios_per_sec": 0, 00:10:04.467 "rw_mbytes_per_sec": 0, 00:10:04.467 "r_mbytes_per_sec": 0, 00:10:04.467 "w_mbytes_per_sec": 0 00:10:04.467 }, 00:10:04.467 "claimed": true, 00:10:04.467 "claim_type": "exclusive_write", 00:10:04.467 "zoned": false, 00:10:04.467 "supported_io_types": { 
00:10:04.467 "read": true, 00:10:04.467 "write": true, 00:10:04.467 "unmap": true, 00:10:04.467 "flush": true, 00:10:04.467 "reset": true, 00:10:04.467 "nvme_admin": false, 00:10:04.467 "nvme_io": false, 00:10:04.467 "nvme_io_md": false, 00:10:04.467 "write_zeroes": true, 00:10:04.467 "zcopy": true, 00:10:04.467 "get_zone_info": false, 00:10:04.467 "zone_management": false, 00:10:04.467 "zone_append": false, 00:10:04.467 "compare": false, 00:10:04.467 "compare_and_write": false, 00:10:04.467 "abort": true, 00:10:04.467 "seek_hole": false, 00:10:04.467 "seek_data": false, 00:10:04.467 "copy": true, 00:10:04.467 "nvme_iov_md": false 00:10:04.467 }, 00:10:04.467 "memory_domains": [ 00:10:04.467 { 00:10:04.467 "dma_device_id": "system", 00:10:04.467 "dma_device_type": 1 00:10:04.467 }, 00:10:04.467 { 00:10:04.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.467 "dma_device_type": 2 00:10:04.467 } 00:10:04.467 ], 00:10:04.467 "driver_specific": {} 00:10:04.467 } 00:10:04.467 ] 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.467 "name": "Existed_Raid", 00:10:04.467 "uuid": "57450a1b-ea88-49f8-8307-e92333540152", 00:10:04.467 "strip_size_kb": 64, 00:10:04.467 "state": "configuring", 00:10:04.467 "raid_level": "raid0", 00:10:04.467 "superblock": true, 00:10:04.467 "num_base_bdevs": 4, 00:10:04.467 "num_base_bdevs_discovered": 3, 00:10:04.467 "num_base_bdevs_operational": 4, 00:10:04.467 "base_bdevs_list": [ 00:10:04.467 { 00:10:04.467 "name": "BaseBdev1", 00:10:04.467 "uuid": "6776c4fa-3d73-4e53-a0e3-03a67bc2a1b8", 00:10:04.467 "is_configured": true, 00:10:04.467 "data_offset": 2048, 00:10:04.467 "data_size": 63488 00:10:04.467 }, 00:10:04.467 { 00:10:04.467 "name": "BaseBdev2", 00:10:04.467 
"uuid": "75553ff3-18a8-4ba4-9743-bf7510839d2b", 00:10:04.467 "is_configured": true, 00:10:04.467 "data_offset": 2048, 00:10:04.467 "data_size": 63488 00:10:04.467 }, 00:10:04.467 { 00:10:04.467 "name": "BaseBdev3", 00:10:04.467 "uuid": "869e3c99-5a83-4cb9-81aa-e9ecdfb4d2a4", 00:10:04.467 "is_configured": true, 00:10:04.467 "data_offset": 2048, 00:10:04.467 "data_size": 63488 00:10:04.467 }, 00:10:04.467 { 00:10:04.467 "name": "BaseBdev4", 00:10:04.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.467 "is_configured": false, 00:10:04.467 "data_offset": 0, 00:10:04.467 "data_size": 0 00:10:04.467 } 00:10:04.467 ] 00:10:04.467 }' 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.467 03:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.728 [2024-11-18 03:10:08.212411] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:04.728 [2024-11-18 03:10:08.212720] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:04.728 [2024-11-18 03:10:08.212783] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:04.728 [2024-11-18 03:10:08.213095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:04.728 BaseBdev4 00:10:04.728 [2024-11-18 03:10:08.213258] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:04.728 [2024-11-18 03:10:08.213278] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:10:04.728 [2024-11-18 03:10:08.213386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.728 [ 00:10:04.728 { 00:10:04.728 "name": "BaseBdev4", 00:10:04.728 "aliases": [ 00:10:04.728 "bd2ea8f2-8999-4382-93ea-05d5450ebd57" 00:10:04.728 ], 00:10:04.728 "product_name": "Malloc disk", 00:10:04.728 "block_size": 512, 00:10:04.728 
"num_blocks": 65536, 00:10:04.728 "uuid": "bd2ea8f2-8999-4382-93ea-05d5450ebd57", 00:10:04.728 "assigned_rate_limits": { 00:10:04.728 "rw_ios_per_sec": 0, 00:10:04.728 "rw_mbytes_per_sec": 0, 00:10:04.728 "r_mbytes_per_sec": 0, 00:10:04.728 "w_mbytes_per_sec": 0 00:10:04.728 }, 00:10:04.728 "claimed": true, 00:10:04.728 "claim_type": "exclusive_write", 00:10:04.728 "zoned": false, 00:10:04.728 "supported_io_types": { 00:10:04.728 "read": true, 00:10:04.728 "write": true, 00:10:04.728 "unmap": true, 00:10:04.728 "flush": true, 00:10:04.728 "reset": true, 00:10:04.728 "nvme_admin": false, 00:10:04.728 "nvme_io": false, 00:10:04.728 "nvme_io_md": false, 00:10:04.728 "write_zeroes": true, 00:10:04.728 "zcopy": true, 00:10:04.728 "get_zone_info": false, 00:10:04.728 "zone_management": false, 00:10:04.728 "zone_append": false, 00:10:04.728 "compare": false, 00:10:04.728 "compare_and_write": false, 00:10:04.728 "abort": true, 00:10:04.728 "seek_hole": false, 00:10:04.728 "seek_data": false, 00:10:04.728 "copy": true, 00:10:04.728 "nvme_iov_md": false 00:10:04.728 }, 00:10:04.728 "memory_domains": [ 00:10:04.728 { 00:10:04.728 "dma_device_id": "system", 00:10:04.728 "dma_device_type": 1 00:10:04.728 }, 00:10:04.728 { 00:10:04.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.728 "dma_device_type": 2 00:10:04.728 } 00:10:04.728 ], 00:10:04.728 "driver_specific": {} 00:10:04.728 } 00:10:04.728 ] 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.728 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.729 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.729 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.729 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.729 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.729 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.729 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.988 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.988 "name": "Existed_Raid", 00:10:04.988 "uuid": "57450a1b-ea88-49f8-8307-e92333540152", 00:10:04.988 "strip_size_kb": 64, 00:10:04.988 "state": "online", 00:10:04.988 "raid_level": "raid0", 00:10:04.988 "superblock": true, 00:10:04.988 "num_base_bdevs": 4, 
00:10:04.988 "num_base_bdevs_discovered": 4, 00:10:04.988 "num_base_bdevs_operational": 4, 00:10:04.988 "base_bdevs_list": [ 00:10:04.988 { 00:10:04.988 "name": "BaseBdev1", 00:10:04.988 "uuid": "6776c4fa-3d73-4e53-a0e3-03a67bc2a1b8", 00:10:04.988 "is_configured": true, 00:10:04.988 "data_offset": 2048, 00:10:04.988 "data_size": 63488 00:10:04.988 }, 00:10:04.988 { 00:10:04.988 "name": "BaseBdev2", 00:10:04.988 "uuid": "75553ff3-18a8-4ba4-9743-bf7510839d2b", 00:10:04.988 "is_configured": true, 00:10:04.988 "data_offset": 2048, 00:10:04.988 "data_size": 63488 00:10:04.988 }, 00:10:04.988 { 00:10:04.988 "name": "BaseBdev3", 00:10:04.988 "uuid": "869e3c99-5a83-4cb9-81aa-e9ecdfb4d2a4", 00:10:04.988 "is_configured": true, 00:10:04.988 "data_offset": 2048, 00:10:04.988 "data_size": 63488 00:10:04.988 }, 00:10:04.988 { 00:10:04.988 "name": "BaseBdev4", 00:10:04.988 "uuid": "bd2ea8f2-8999-4382-93ea-05d5450ebd57", 00:10:04.988 "is_configured": true, 00:10:04.988 "data_offset": 2048, 00:10:04.988 "data_size": 63488 00:10:04.988 } 00:10:04.988 ] 00:10:04.988 }' 00:10:04.988 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.988 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.248 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:05.248 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:05.248 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:05.248 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:05.248 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:05.248 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:05.248 
03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:05.248 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:05.248 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.248 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.248 [2024-11-18 03:10:08.735938] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.248 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.248 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:05.248 "name": "Existed_Raid", 00:10:05.248 "aliases": [ 00:10:05.248 "57450a1b-ea88-49f8-8307-e92333540152" 00:10:05.248 ], 00:10:05.248 "product_name": "Raid Volume", 00:10:05.248 "block_size": 512, 00:10:05.248 "num_blocks": 253952, 00:10:05.248 "uuid": "57450a1b-ea88-49f8-8307-e92333540152", 00:10:05.248 "assigned_rate_limits": { 00:10:05.248 "rw_ios_per_sec": 0, 00:10:05.248 "rw_mbytes_per_sec": 0, 00:10:05.248 "r_mbytes_per_sec": 0, 00:10:05.248 "w_mbytes_per_sec": 0 00:10:05.248 }, 00:10:05.248 "claimed": false, 00:10:05.248 "zoned": false, 00:10:05.248 "supported_io_types": { 00:10:05.248 "read": true, 00:10:05.248 "write": true, 00:10:05.248 "unmap": true, 00:10:05.248 "flush": true, 00:10:05.248 "reset": true, 00:10:05.248 "nvme_admin": false, 00:10:05.248 "nvme_io": false, 00:10:05.248 "nvme_io_md": false, 00:10:05.248 "write_zeroes": true, 00:10:05.248 "zcopy": false, 00:10:05.248 "get_zone_info": false, 00:10:05.248 "zone_management": false, 00:10:05.248 "zone_append": false, 00:10:05.248 "compare": false, 00:10:05.248 "compare_and_write": false, 00:10:05.248 "abort": false, 00:10:05.248 "seek_hole": false, 00:10:05.248 "seek_data": false, 00:10:05.248 "copy": false, 00:10:05.248 
"nvme_iov_md": false 00:10:05.248 }, 00:10:05.248 "memory_domains": [ 00:10:05.248 { 00:10:05.248 "dma_device_id": "system", 00:10:05.248 "dma_device_type": 1 00:10:05.248 }, 00:10:05.248 { 00:10:05.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.248 "dma_device_type": 2 00:10:05.248 }, 00:10:05.248 { 00:10:05.248 "dma_device_id": "system", 00:10:05.248 "dma_device_type": 1 00:10:05.248 }, 00:10:05.248 { 00:10:05.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.248 "dma_device_type": 2 00:10:05.248 }, 00:10:05.248 { 00:10:05.248 "dma_device_id": "system", 00:10:05.248 "dma_device_type": 1 00:10:05.248 }, 00:10:05.248 { 00:10:05.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.248 "dma_device_type": 2 00:10:05.248 }, 00:10:05.248 { 00:10:05.248 "dma_device_id": "system", 00:10:05.248 "dma_device_type": 1 00:10:05.248 }, 00:10:05.248 { 00:10:05.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.248 "dma_device_type": 2 00:10:05.248 } 00:10:05.248 ], 00:10:05.248 "driver_specific": { 00:10:05.248 "raid": { 00:10:05.248 "uuid": "57450a1b-ea88-49f8-8307-e92333540152", 00:10:05.248 "strip_size_kb": 64, 00:10:05.248 "state": "online", 00:10:05.248 "raid_level": "raid0", 00:10:05.248 "superblock": true, 00:10:05.248 "num_base_bdevs": 4, 00:10:05.248 "num_base_bdevs_discovered": 4, 00:10:05.248 "num_base_bdevs_operational": 4, 00:10:05.248 "base_bdevs_list": [ 00:10:05.248 { 00:10:05.248 "name": "BaseBdev1", 00:10:05.248 "uuid": "6776c4fa-3d73-4e53-a0e3-03a67bc2a1b8", 00:10:05.248 "is_configured": true, 00:10:05.249 "data_offset": 2048, 00:10:05.249 "data_size": 63488 00:10:05.249 }, 00:10:05.249 { 00:10:05.249 "name": "BaseBdev2", 00:10:05.249 "uuid": "75553ff3-18a8-4ba4-9743-bf7510839d2b", 00:10:05.249 "is_configured": true, 00:10:05.249 "data_offset": 2048, 00:10:05.249 "data_size": 63488 00:10:05.249 }, 00:10:05.249 { 00:10:05.249 "name": "BaseBdev3", 00:10:05.249 "uuid": "869e3c99-5a83-4cb9-81aa-e9ecdfb4d2a4", 00:10:05.249 "is_configured": true, 
00:10:05.249 "data_offset": 2048, 00:10:05.249 "data_size": 63488 00:10:05.249 }, 00:10:05.249 { 00:10:05.249 "name": "BaseBdev4", 00:10:05.249 "uuid": "bd2ea8f2-8999-4382-93ea-05d5450ebd57", 00:10:05.249 "is_configured": true, 00:10:05.249 "data_offset": 2048, 00:10:05.249 "data_size": 63488 00:10:05.249 } 00:10:05.249 ] 00:10:05.249 } 00:10:05.249 } 00:10:05.249 }' 00:10:05.249 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.249 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:05.249 BaseBdev2 00:10:05.249 BaseBdev3 00:10:05.249 BaseBdev4' 00:10:05.249 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.508 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:05.508 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.508 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:05.508 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.509 03:10:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.509 03:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.509 [2024-11-18 03:10:09.031127] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.509 [2024-11-18 03:10:09.031204] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.509 [2024-11-18 03:10:09.031266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.509 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:05.786 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.786 "name": "Existed_Raid", 00:10:05.786 "uuid": "57450a1b-ea88-49f8-8307-e92333540152", 00:10:05.786 "strip_size_kb": 64, 00:10:05.786 "state": "offline", 00:10:05.786 "raid_level": "raid0", 00:10:05.786 "superblock": true, 00:10:05.786 "num_base_bdevs": 4, 00:10:05.786 "num_base_bdevs_discovered": 3, 00:10:05.786 "num_base_bdevs_operational": 3, 00:10:05.786 "base_bdevs_list": [ 00:10:05.786 { 00:10:05.786 "name": null, 00:10:05.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.786 "is_configured": false, 00:10:05.786 "data_offset": 0, 00:10:05.786 "data_size": 63488 00:10:05.786 }, 00:10:05.786 { 00:10:05.786 "name": "BaseBdev2", 00:10:05.786 "uuid": "75553ff3-18a8-4ba4-9743-bf7510839d2b", 00:10:05.786 "is_configured": true, 00:10:05.786 "data_offset": 2048, 00:10:05.786 "data_size": 63488 00:10:05.786 }, 00:10:05.786 { 00:10:05.786 "name": "BaseBdev3", 00:10:05.786 "uuid": "869e3c99-5a83-4cb9-81aa-e9ecdfb4d2a4", 00:10:05.786 "is_configured": true, 00:10:05.786 "data_offset": 2048, 00:10:05.786 "data_size": 63488 00:10:05.786 }, 00:10:05.786 { 00:10:05.786 "name": "BaseBdev4", 00:10:05.786 "uuid": "bd2ea8f2-8999-4382-93ea-05d5450ebd57", 00:10:05.786 "is_configured": true, 00:10:05.786 "data_offset": 2048, 00:10:05.786 "data_size": 63488 00:10:05.786 } 00:10:05.786 ] 00:10:05.786 }' 00:10:05.786 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.786 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.046 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:06.046 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.046 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.046 
03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.046 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.047 [2024-11-18 03:10:09.497932] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.047 [2024-11-18 03:10:09.569291] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.047 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.307 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.307 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:06.308 03:10:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.308 [2024-11-18 03:10:09.640594] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:06.308 [2024-11-18 03:10:09.640687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.308 BaseBdev2 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.308 [ 00:10:06.308 { 00:10:06.308 "name": "BaseBdev2", 00:10:06.308 "aliases": [ 00:10:06.308 
"e5cb7b4e-2b46-4ba6-b9b4-30710179f926" 00:10:06.308 ], 00:10:06.308 "product_name": "Malloc disk", 00:10:06.308 "block_size": 512, 00:10:06.308 "num_blocks": 65536, 00:10:06.308 "uuid": "e5cb7b4e-2b46-4ba6-b9b4-30710179f926", 00:10:06.308 "assigned_rate_limits": { 00:10:06.308 "rw_ios_per_sec": 0, 00:10:06.308 "rw_mbytes_per_sec": 0, 00:10:06.308 "r_mbytes_per_sec": 0, 00:10:06.308 "w_mbytes_per_sec": 0 00:10:06.308 }, 00:10:06.308 "claimed": false, 00:10:06.308 "zoned": false, 00:10:06.308 "supported_io_types": { 00:10:06.308 "read": true, 00:10:06.308 "write": true, 00:10:06.308 "unmap": true, 00:10:06.308 "flush": true, 00:10:06.308 "reset": true, 00:10:06.308 "nvme_admin": false, 00:10:06.308 "nvme_io": false, 00:10:06.308 "nvme_io_md": false, 00:10:06.308 "write_zeroes": true, 00:10:06.308 "zcopy": true, 00:10:06.308 "get_zone_info": false, 00:10:06.308 "zone_management": false, 00:10:06.308 "zone_append": false, 00:10:06.308 "compare": false, 00:10:06.308 "compare_and_write": false, 00:10:06.308 "abort": true, 00:10:06.308 "seek_hole": false, 00:10:06.308 "seek_data": false, 00:10:06.308 "copy": true, 00:10:06.308 "nvme_iov_md": false 00:10:06.308 }, 00:10:06.308 "memory_domains": [ 00:10:06.308 { 00:10:06.308 "dma_device_id": "system", 00:10:06.308 "dma_device_type": 1 00:10:06.308 }, 00:10:06.308 { 00:10:06.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.308 "dma_device_type": 2 00:10:06.308 } 00:10:06.308 ], 00:10:06.308 "driver_specific": {} 00:10:06.308 } 00:10:06.308 ] 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.308 03:10:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.308 BaseBdev3 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.308 [ 00:10:06.308 { 
00:10:06.308 "name": "BaseBdev3", 00:10:06.308 "aliases": [ 00:10:06.308 "23a588a0-4ddb-4dde-815e-4527f90b7048" 00:10:06.308 ], 00:10:06.308 "product_name": "Malloc disk", 00:10:06.308 "block_size": 512, 00:10:06.308 "num_blocks": 65536, 00:10:06.308 "uuid": "23a588a0-4ddb-4dde-815e-4527f90b7048", 00:10:06.308 "assigned_rate_limits": { 00:10:06.308 "rw_ios_per_sec": 0, 00:10:06.308 "rw_mbytes_per_sec": 0, 00:10:06.308 "r_mbytes_per_sec": 0, 00:10:06.308 "w_mbytes_per_sec": 0 00:10:06.308 }, 00:10:06.308 "claimed": false, 00:10:06.308 "zoned": false, 00:10:06.308 "supported_io_types": { 00:10:06.308 "read": true, 00:10:06.308 "write": true, 00:10:06.308 "unmap": true, 00:10:06.308 "flush": true, 00:10:06.308 "reset": true, 00:10:06.308 "nvme_admin": false, 00:10:06.308 "nvme_io": false, 00:10:06.308 "nvme_io_md": false, 00:10:06.308 "write_zeroes": true, 00:10:06.308 "zcopy": true, 00:10:06.308 "get_zone_info": false, 00:10:06.308 "zone_management": false, 00:10:06.308 "zone_append": false, 00:10:06.308 "compare": false, 00:10:06.308 "compare_and_write": false, 00:10:06.308 "abort": true, 00:10:06.308 "seek_hole": false, 00:10:06.308 "seek_data": false, 00:10:06.308 "copy": true, 00:10:06.308 "nvme_iov_md": false 00:10:06.308 }, 00:10:06.308 "memory_domains": [ 00:10:06.308 { 00:10:06.308 "dma_device_id": "system", 00:10:06.308 "dma_device_type": 1 00:10:06.308 }, 00:10:06.308 { 00:10:06.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.308 "dma_device_type": 2 00:10:06.308 } 00:10:06.308 ], 00:10:06.308 "driver_specific": {} 00:10:06.308 } 00:10:06.308 ] 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.308 BaseBdev4 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:06.308 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:06.309 [ 00:10:06.309 { 00:10:06.309 "name": "BaseBdev4", 00:10:06.309 "aliases": [ 00:10:06.309 "31a87c43-c458-4cdc-bd5e-83b117f9baae" 00:10:06.309 ], 00:10:06.309 "product_name": "Malloc disk", 00:10:06.309 "block_size": 512, 00:10:06.309 "num_blocks": 65536, 00:10:06.309 "uuid": "31a87c43-c458-4cdc-bd5e-83b117f9baae", 00:10:06.309 "assigned_rate_limits": { 00:10:06.309 "rw_ios_per_sec": 0, 00:10:06.309 "rw_mbytes_per_sec": 0, 00:10:06.309 "r_mbytes_per_sec": 0, 00:10:06.309 "w_mbytes_per_sec": 0 00:10:06.309 }, 00:10:06.309 "claimed": false, 00:10:06.309 "zoned": false, 00:10:06.309 "supported_io_types": { 00:10:06.309 "read": true, 00:10:06.309 "write": true, 00:10:06.309 "unmap": true, 00:10:06.309 "flush": true, 00:10:06.309 "reset": true, 00:10:06.309 "nvme_admin": false, 00:10:06.309 "nvme_io": false, 00:10:06.309 "nvme_io_md": false, 00:10:06.309 "write_zeroes": true, 00:10:06.309 "zcopy": true, 00:10:06.309 "get_zone_info": false, 00:10:06.309 "zone_management": false, 00:10:06.309 "zone_append": false, 00:10:06.309 "compare": false, 00:10:06.309 "compare_and_write": false, 00:10:06.309 "abort": true, 00:10:06.309 "seek_hole": false, 00:10:06.309 "seek_data": false, 00:10:06.309 "copy": true, 00:10:06.309 "nvme_iov_md": false 00:10:06.309 }, 00:10:06.309 "memory_domains": [ 00:10:06.309 { 00:10:06.309 "dma_device_id": "system", 00:10:06.309 "dma_device_type": 1 00:10:06.309 }, 00:10:06.309 { 00:10:06.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.309 "dma_device_type": 2 00:10:06.309 } 00:10:06.309 ], 00:10:06.309 "driver_specific": {} 00:10:06.309 } 00:10:06.309 ] 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.309 03:10:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.309 [2024-11-18 03:10:09.854711] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.309 [2024-11-18 03:10:09.854756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.309 [2024-11-18 03:10:09.854778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.309 [2024-11-18 03:10:09.856749] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.309 [2024-11-18 03:10:09.856804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.309 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.569 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.569 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.569 "name": "Existed_Raid", 00:10:06.569 "uuid": "d9996e27-e28b-41e0-9e26-6ed1d8334a35", 00:10:06.569 "strip_size_kb": 64, 00:10:06.569 "state": "configuring", 00:10:06.569 "raid_level": "raid0", 00:10:06.569 "superblock": true, 00:10:06.569 "num_base_bdevs": 4, 00:10:06.569 "num_base_bdevs_discovered": 3, 00:10:06.569 "num_base_bdevs_operational": 4, 00:10:06.569 "base_bdevs_list": [ 00:10:06.569 { 00:10:06.569 "name": "BaseBdev1", 00:10:06.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.569 "is_configured": false, 00:10:06.569 "data_offset": 0, 00:10:06.569 "data_size": 0 00:10:06.569 }, 00:10:06.569 { 00:10:06.569 "name": "BaseBdev2", 00:10:06.569 "uuid": "e5cb7b4e-2b46-4ba6-b9b4-30710179f926", 00:10:06.569 "is_configured": true, 00:10:06.569 "data_offset": 2048, 00:10:06.569 "data_size": 63488 
00:10:06.569 }, 00:10:06.569 { 00:10:06.569 "name": "BaseBdev3", 00:10:06.569 "uuid": "23a588a0-4ddb-4dde-815e-4527f90b7048", 00:10:06.569 "is_configured": true, 00:10:06.569 "data_offset": 2048, 00:10:06.569 "data_size": 63488 00:10:06.569 }, 00:10:06.569 { 00:10:06.569 "name": "BaseBdev4", 00:10:06.569 "uuid": "31a87c43-c458-4cdc-bd5e-83b117f9baae", 00:10:06.569 "is_configured": true, 00:10:06.569 "data_offset": 2048, 00:10:06.569 "data_size": 63488 00:10:06.569 } 00:10:06.569 ] 00:10:06.569 }' 00:10:06.569 03:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.569 03:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.829 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:06.829 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.829 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.829 [2024-11-18 03:10:10.297975] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:06.829 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.829 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.829 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.829 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.829 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.829 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.829 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:06.829 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.829 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.829 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.830 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.830 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.830 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.830 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.830 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.830 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.830 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.830 "name": "Existed_Raid", 00:10:06.830 "uuid": "d9996e27-e28b-41e0-9e26-6ed1d8334a35", 00:10:06.830 "strip_size_kb": 64, 00:10:06.830 "state": "configuring", 00:10:06.830 "raid_level": "raid0", 00:10:06.830 "superblock": true, 00:10:06.830 "num_base_bdevs": 4, 00:10:06.830 "num_base_bdevs_discovered": 2, 00:10:06.830 "num_base_bdevs_operational": 4, 00:10:06.830 "base_bdevs_list": [ 00:10:06.830 { 00:10:06.830 "name": "BaseBdev1", 00:10:06.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.830 "is_configured": false, 00:10:06.830 "data_offset": 0, 00:10:06.830 "data_size": 0 00:10:06.830 }, 00:10:06.830 { 00:10:06.830 "name": null, 00:10:06.830 "uuid": "e5cb7b4e-2b46-4ba6-b9b4-30710179f926", 00:10:06.830 "is_configured": false, 00:10:06.830 "data_offset": 0, 00:10:06.830 "data_size": 63488 
00:10:06.830 }, 00:10:06.830 { 00:10:06.830 "name": "BaseBdev3", 00:10:06.830 "uuid": "23a588a0-4ddb-4dde-815e-4527f90b7048", 00:10:06.830 "is_configured": true, 00:10:06.830 "data_offset": 2048, 00:10:06.830 "data_size": 63488 00:10:06.830 }, 00:10:06.830 { 00:10:06.830 "name": "BaseBdev4", 00:10:06.830 "uuid": "31a87c43-c458-4cdc-bd5e-83b117f9baae", 00:10:06.830 "is_configured": true, 00:10:06.830 "data_offset": 2048, 00:10:06.830 "data_size": 63488 00:10:06.830 } 00:10:06.830 ] 00:10:06.830 }' 00:10:06.830 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.830 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.400 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:07.400 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.400 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.400 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.400 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.400 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.401 [2024-11-18 03:10:10.788249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.401 BaseBdev1 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.401 [ 00:10:07.401 { 00:10:07.401 "name": "BaseBdev1", 00:10:07.401 "aliases": [ 00:10:07.401 "3f5038e6-18d3-48e9-bbbf-a05afbaaa9e6" 00:10:07.401 ], 00:10:07.401 "product_name": "Malloc disk", 00:10:07.401 "block_size": 512, 00:10:07.401 "num_blocks": 65536, 00:10:07.401 "uuid": "3f5038e6-18d3-48e9-bbbf-a05afbaaa9e6", 00:10:07.401 "assigned_rate_limits": { 00:10:07.401 "rw_ios_per_sec": 0, 00:10:07.401 "rw_mbytes_per_sec": 0, 
00:10:07.401 "r_mbytes_per_sec": 0, 00:10:07.401 "w_mbytes_per_sec": 0 00:10:07.401 }, 00:10:07.401 "claimed": true, 00:10:07.401 "claim_type": "exclusive_write", 00:10:07.401 "zoned": false, 00:10:07.401 "supported_io_types": { 00:10:07.401 "read": true, 00:10:07.401 "write": true, 00:10:07.401 "unmap": true, 00:10:07.401 "flush": true, 00:10:07.401 "reset": true, 00:10:07.401 "nvme_admin": false, 00:10:07.401 "nvme_io": false, 00:10:07.401 "nvme_io_md": false, 00:10:07.401 "write_zeroes": true, 00:10:07.401 "zcopy": true, 00:10:07.401 "get_zone_info": false, 00:10:07.401 "zone_management": false, 00:10:07.401 "zone_append": false, 00:10:07.401 "compare": false, 00:10:07.401 "compare_and_write": false, 00:10:07.401 "abort": true, 00:10:07.401 "seek_hole": false, 00:10:07.401 "seek_data": false, 00:10:07.401 "copy": true, 00:10:07.401 "nvme_iov_md": false 00:10:07.401 }, 00:10:07.401 "memory_domains": [ 00:10:07.401 { 00:10:07.401 "dma_device_id": "system", 00:10:07.401 "dma_device_type": 1 00:10:07.401 }, 00:10:07.401 { 00:10:07.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.401 "dma_device_type": 2 00:10:07.401 } 00:10:07.401 ], 00:10:07.401 "driver_specific": {} 00:10:07.401 } 00:10:07.401 ] 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.401 03:10:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.401 "name": "Existed_Raid", 00:10:07.401 "uuid": "d9996e27-e28b-41e0-9e26-6ed1d8334a35", 00:10:07.401 "strip_size_kb": 64, 00:10:07.401 "state": "configuring", 00:10:07.401 "raid_level": "raid0", 00:10:07.401 "superblock": true, 00:10:07.401 "num_base_bdevs": 4, 00:10:07.401 "num_base_bdevs_discovered": 3, 00:10:07.401 "num_base_bdevs_operational": 4, 00:10:07.401 "base_bdevs_list": [ 00:10:07.401 { 00:10:07.401 "name": "BaseBdev1", 00:10:07.401 "uuid": "3f5038e6-18d3-48e9-bbbf-a05afbaaa9e6", 00:10:07.401 "is_configured": true, 00:10:07.401 "data_offset": 2048, 00:10:07.401 "data_size": 63488 00:10:07.401 }, 00:10:07.401 { 
00:10:07.401 "name": null, 00:10:07.401 "uuid": "e5cb7b4e-2b46-4ba6-b9b4-30710179f926", 00:10:07.401 "is_configured": false, 00:10:07.401 "data_offset": 0, 00:10:07.401 "data_size": 63488 00:10:07.401 }, 00:10:07.401 { 00:10:07.401 "name": "BaseBdev3", 00:10:07.401 "uuid": "23a588a0-4ddb-4dde-815e-4527f90b7048", 00:10:07.401 "is_configured": true, 00:10:07.401 "data_offset": 2048, 00:10:07.401 "data_size": 63488 00:10:07.401 }, 00:10:07.401 { 00:10:07.401 "name": "BaseBdev4", 00:10:07.401 "uuid": "31a87c43-c458-4cdc-bd5e-83b117f9baae", 00:10:07.401 "is_configured": true, 00:10:07.401 "data_offset": 2048, 00:10:07.401 "data_size": 63488 00:10:07.401 } 00:10:07.401 ] 00:10:07.401 }' 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.401 03:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.972 [2024-11-18 03:10:11.343337] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.972 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.972 03:10:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.972 "name": "Existed_Raid", 00:10:07.972 "uuid": "d9996e27-e28b-41e0-9e26-6ed1d8334a35", 00:10:07.972 "strip_size_kb": 64, 00:10:07.972 "state": "configuring", 00:10:07.972 "raid_level": "raid0", 00:10:07.972 "superblock": true, 00:10:07.972 "num_base_bdevs": 4, 00:10:07.972 "num_base_bdevs_discovered": 2, 00:10:07.972 "num_base_bdevs_operational": 4, 00:10:07.973 "base_bdevs_list": [ 00:10:07.973 { 00:10:07.973 "name": "BaseBdev1", 00:10:07.973 "uuid": "3f5038e6-18d3-48e9-bbbf-a05afbaaa9e6", 00:10:07.973 "is_configured": true, 00:10:07.973 "data_offset": 2048, 00:10:07.973 "data_size": 63488 00:10:07.973 }, 00:10:07.973 { 00:10:07.973 "name": null, 00:10:07.973 "uuid": "e5cb7b4e-2b46-4ba6-b9b4-30710179f926", 00:10:07.973 "is_configured": false, 00:10:07.973 "data_offset": 0, 00:10:07.973 "data_size": 63488 00:10:07.973 }, 00:10:07.973 { 00:10:07.973 "name": null, 00:10:07.973 "uuid": "23a588a0-4ddb-4dde-815e-4527f90b7048", 00:10:07.973 "is_configured": false, 00:10:07.973 "data_offset": 0, 00:10:07.973 "data_size": 63488 00:10:07.973 }, 00:10:07.973 { 00:10:07.973 "name": "BaseBdev4", 00:10:07.973 "uuid": "31a87c43-c458-4cdc-bd5e-83b117f9baae", 00:10:07.973 "is_configured": true, 00:10:07.973 "data_offset": 2048, 00:10:07.973 "data_size": 63488 00:10:07.973 } 00:10:07.973 ] 00:10:07.973 }' 00:10:07.973 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.973 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.233 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.233 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.233 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.233 
03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.233 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.233 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:08.233 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:08.233 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.233 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.233 [2024-11-18 03:10:11.806606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.493 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.493 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.493 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.493 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.493 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.493 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.493 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.493 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.493 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.493 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:08.493 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.493 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.493 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.493 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.493 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.493 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.493 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.493 "name": "Existed_Raid", 00:10:08.493 "uuid": "d9996e27-e28b-41e0-9e26-6ed1d8334a35", 00:10:08.493 "strip_size_kb": 64, 00:10:08.493 "state": "configuring", 00:10:08.493 "raid_level": "raid0", 00:10:08.493 "superblock": true, 00:10:08.493 "num_base_bdevs": 4, 00:10:08.493 "num_base_bdevs_discovered": 3, 00:10:08.493 "num_base_bdevs_operational": 4, 00:10:08.493 "base_bdevs_list": [ 00:10:08.493 { 00:10:08.493 "name": "BaseBdev1", 00:10:08.493 "uuid": "3f5038e6-18d3-48e9-bbbf-a05afbaaa9e6", 00:10:08.493 "is_configured": true, 00:10:08.493 "data_offset": 2048, 00:10:08.493 "data_size": 63488 00:10:08.493 }, 00:10:08.493 { 00:10:08.493 "name": null, 00:10:08.493 "uuid": "e5cb7b4e-2b46-4ba6-b9b4-30710179f926", 00:10:08.493 "is_configured": false, 00:10:08.493 "data_offset": 0, 00:10:08.493 "data_size": 63488 00:10:08.493 }, 00:10:08.493 { 00:10:08.493 "name": "BaseBdev3", 00:10:08.493 "uuid": "23a588a0-4ddb-4dde-815e-4527f90b7048", 00:10:08.493 "is_configured": true, 00:10:08.493 "data_offset": 2048, 00:10:08.493 "data_size": 63488 00:10:08.493 }, 00:10:08.493 { 00:10:08.493 "name": "BaseBdev4", 00:10:08.493 "uuid": 
"31a87c43-c458-4cdc-bd5e-83b117f9baae", 00:10:08.493 "is_configured": true, 00:10:08.493 "data_offset": 2048, 00:10:08.493 "data_size": 63488 00:10:08.493 } 00:10:08.493 ] 00:10:08.493 }' 00:10:08.494 03:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.494 03:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.754 [2024-11-18 03:10:12.281823] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.754 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.014 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.014 "name": "Existed_Raid", 00:10:09.014 "uuid": "d9996e27-e28b-41e0-9e26-6ed1d8334a35", 00:10:09.014 "strip_size_kb": 64, 00:10:09.014 "state": "configuring", 00:10:09.014 "raid_level": "raid0", 00:10:09.014 "superblock": true, 00:10:09.014 "num_base_bdevs": 4, 00:10:09.014 "num_base_bdevs_discovered": 2, 00:10:09.014 "num_base_bdevs_operational": 4, 00:10:09.014 "base_bdevs_list": [ 00:10:09.014 { 00:10:09.014 "name": null, 00:10:09.014 
"uuid": "3f5038e6-18d3-48e9-bbbf-a05afbaaa9e6", 00:10:09.014 "is_configured": false, 00:10:09.014 "data_offset": 0, 00:10:09.014 "data_size": 63488 00:10:09.014 }, 00:10:09.014 { 00:10:09.014 "name": null, 00:10:09.014 "uuid": "e5cb7b4e-2b46-4ba6-b9b4-30710179f926", 00:10:09.014 "is_configured": false, 00:10:09.014 "data_offset": 0, 00:10:09.014 "data_size": 63488 00:10:09.014 }, 00:10:09.014 { 00:10:09.014 "name": "BaseBdev3", 00:10:09.014 "uuid": "23a588a0-4ddb-4dde-815e-4527f90b7048", 00:10:09.014 "is_configured": true, 00:10:09.014 "data_offset": 2048, 00:10:09.014 "data_size": 63488 00:10:09.014 }, 00:10:09.014 { 00:10:09.014 "name": "BaseBdev4", 00:10:09.014 "uuid": "31a87c43-c458-4cdc-bd5e-83b117f9baae", 00:10:09.014 "is_configured": true, 00:10:09.014 "data_offset": 2048, 00:10:09.014 "data_size": 63488 00:10:09.014 } 00:10:09.014 ] 00:10:09.014 }' 00:10:09.014 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.014 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.275 [2024-11-18 03:10:12.803712] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.275 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.535 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.535 "name": "Existed_Raid", 00:10:09.535 "uuid": "d9996e27-e28b-41e0-9e26-6ed1d8334a35", 00:10:09.535 "strip_size_kb": 64, 00:10:09.535 "state": "configuring", 00:10:09.535 "raid_level": "raid0", 00:10:09.535 "superblock": true, 00:10:09.535 "num_base_bdevs": 4, 00:10:09.535 "num_base_bdevs_discovered": 3, 00:10:09.535 "num_base_bdevs_operational": 4, 00:10:09.535 "base_bdevs_list": [ 00:10:09.535 { 00:10:09.535 "name": null, 00:10:09.535 "uuid": "3f5038e6-18d3-48e9-bbbf-a05afbaaa9e6", 00:10:09.535 "is_configured": false, 00:10:09.535 "data_offset": 0, 00:10:09.535 "data_size": 63488 00:10:09.535 }, 00:10:09.535 { 00:10:09.535 "name": "BaseBdev2", 00:10:09.535 "uuid": "e5cb7b4e-2b46-4ba6-b9b4-30710179f926", 00:10:09.535 "is_configured": true, 00:10:09.535 "data_offset": 2048, 00:10:09.535 "data_size": 63488 00:10:09.535 }, 00:10:09.535 { 00:10:09.535 "name": "BaseBdev3", 00:10:09.535 "uuid": "23a588a0-4ddb-4dde-815e-4527f90b7048", 00:10:09.535 "is_configured": true, 00:10:09.535 "data_offset": 2048, 00:10:09.535 "data_size": 63488 00:10:09.535 }, 00:10:09.535 { 00:10:09.535 "name": "BaseBdev4", 00:10:09.535 "uuid": "31a87c43-c458-4cdc-bd5e-83b117f9baae", 00:10:09.535 "is_configured": true, 00:10:09.535 "data_offset": 2048, 00:10:09.535 "data_size": 63488 00:10:09.535 } 00:10:09.535 ] 00:10:09.535 }' 00:10:09.535 03:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.535 03:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:09.796 03:10:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3f5038e6-18d3-48e9-bbbf-a05afbaaa9e6 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.796 [2024-11-18 03:10:13.302121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:09.796 [2024-11-18 03:10:13.302317] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:09.796 [2024-11-18 03:10:13.302330] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:09.796 [2024-11-18 03:10:13.302611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:10:09.796 [2024-11-18 03:10:13.302736] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:09.796 [2024-11-18 03:10:13.302754] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:09.796 [2024-11-18 03:10:13.302857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.796 NewBaseBdev 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.796 03:10:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.796 [ 00:10:09.796 { 00:10:09.796 "name": "NewBaseBdev", 00:10:09.796 "aliases": [ 00:10:09.796 "3f5038e6-18d3-48e9-bbbf-a05afbaaa9e6" 00:10:09.796 ], 00:10:09.796 "product_name": "Malloc disk", 00:10:09.796 "block_size": 512, 00:10:09.796 "num_blocks": 65536, 00:10:09.796 "uuid": "3f5038e6-18d3-48e9-bbbf-a05afbaaa9e6", 00:10:09.796 "assigned_rate_limits": { 00:10:09.796 "rw_ios_per_sec": 0, 00:10:09.796 "rw_mbytes_per_sec": 0, 00:10:09.796 "r_mbytes_per_sec": 0, 00:10:09.796 "w_mbytes_per_sec": 0 00:10:09.796 }, 00:10:09.796 "claimed": true, 00:10:09.796 "claim_type": "exclusive_write", 00:10:09.796 "zoned": false, 00:10:09.796 "supported_io_types": { 00:10:09.796 "read": true, 00:10:09.796 "write": true, 00:10:09.796 "unmap": true, 00:10:09.796 "flush": true, 00:10:09.796 "reset": true, 00:10:09.796 "nvme_admin": false, 00:10:09.796 "nvme_io": false, 00:10:09.796 "nvme_io_md": false, 00:10:09.796 "write_zeroes": true, 00:10:09.796 "zcopy": true, 00:10:09.796 "get_zone_info": false, 00:10:09.796 "zone_management": false, 00:10:09.796 "zone_append": false, 00:10:09.796 "compare": false, 00:10:09.796 "compare_and_write": false, 00:10:09.796 "abort": true, 00:10:09.796 "seek_hole": false, 00:10:09.796 "seek_data": false, 00:10:09.796 "copy": true, 00:10:09.796 "nvme_iov_md": false 00:10:09.796 }, 00:10:09.796 "memory_domains": [ 00:10:09.796 { 00:10:09.796 "dma_device_id": "system", 00:10:09.796 "dma_device_type": 1 00:10:09.796 }, 00:10:09.796 { 00:10:09.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.796 "dma_device_type": 2 00:10:09.796 } 00:10:09.796 ], 00:10:09.796 "driver_specific": {} 00:10:09.796 } 00:10:09.796 ] 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:09.796 03:10:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.796 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.057 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.057 "name": "Existed_Raid", 00:10:10.057 "uuid": "d9996e27-e28b-41e0-9e26-6ed1d8334a35", 00:10:10.057 "strip_size_kb": 64, 00:10:10.057 
"state": "online", 00:10:10.057 "raid_level": "raid0", 00:10:10.057 "superblock": true, 00:10:10.057 "num_base_bdevs": 4, 00:10:10.057 "num_base_bdevs_discovered": 4, 00:10:10.057 "num_base_bdevs_operational": 4, 00:10:10.057 "base_bdevs_list": [ 00:10:10.057 { 00:10:10.057 "name": "NewBaseBdev", 00:10:10.057 "uuid": "3f5038e6-18d3-48e9-bbbf-a05afbaaa9e6", 00:10:10.057 "is_configured": true, 00:10:10.057 "data_offset": 2048, 00:10:10.057 "data_size": 63488 00:10:10.057 }, 00:10:10.057 { 00:10:10.057 "name": "BaseBdev2", 00:10:10.058 "uuid": "e5cb7b4e-2b46-4ba6-b9b4-30710179f926", 00:10:10.058 "is_configured": true, 00:10:10.058 "data_offset": 2048, 00:10:10.058 "data_size": 63488 00:10:10.058 }, 00:10:10.058 { 00:10:10.058 "name": "BaseBdev3", 00:10:10.058 "uuid": "23a588a0-4ddb-4dde-815e-4527f90b7048", 00:10:10.058 "is_configured": true, 00:10:10.058 "data_offset": 2048, 00:10:10.058 "data_size": 63488 00:10:10.058 }, 00:10:10.058 { 00:10:10.058 "name": "BaseBdev4", 00:10:10.058 "uuid": "31a87c43-c458-4cdc-bd5e-83b117f9baae", 00:10:10.058 "is_configured": true, 00:10:10.058 "data_offset": 2048, 00:10:10.058 "data_size": 63488 00:10:10.058 } 00:10:10.058 ] 00:10:10.058 }' 00:10:10.058 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.058 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.320 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.320 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.320 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.320 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.320 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.320 
03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.320 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:10.320 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.320 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.320 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.320 [2024-11-18 03:10:13.761746] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.320 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.320 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.320 "name": "Existed_Raid", 00:10:10.320 "aliases": [ 00:10:10.320 "d9996e27-e28b-41e0-9e26-6ed1d8334a35" 00:10:10.320 ], 00:10:10.320 "product_name": "Raid Volume", 00:10:10.320 "block_size": 512, 00:10:10.320 "num_blocks": 253952, 00:10:10.320 "uuid": "d9996e27-e28b-41e0-9e26-6ed1d8334a35", 00:10:10.320 "assigned_rate_limits": { 00:10:10.320 "rw_ios_per_sec": 0, 00:10:10.320 "rw_mbytes_per_sec": 0, 00:10:10.320 "r_mbytes_per_sec": 0, 00:10:10.320 "w_mbytes_per_sec": 0 00:10:10.320 }, 00:10:10.320 "claimed": false, 00:10:10.320 "zoned": false, 00:10:10.320 "supported_io_types": { 00:10:10.320 "read": true, 00:10:10.320 "write": true, 00:10:10.320 "unmap": true, 00:10:10.320 "flush": true, 00:10:10.320 "reset": true, 00:10:10.320 "nvme_admin": false, 00:10:10.320 "nvme_io": false, 00:10:10.320 "nvme_io_md": false, 00:10:10.320 "write_zeroes": true, 00:10:10.320 "zcopy": false, 00:10:10.320 "get_zone_info": false, 00:10:10.320 "zone_management": false, 00:10:10.320 "zone_append": false, 00:10:10.320 "compare": false, 00:10:10.320 "compare_and_write": false, 00:10:10.320 "abort": 
false, 00:10:10.320 "seek_hole": false, 00:10:10.320 "seek_data": false, 00:10:10.320 "copy": false, 00:10:10.320 "nvme_iov_md": false 00:10:10.320 }, 00:10:10.320 "memory_domains": [ 00:10:10.320 { 00:10:10.320 "dma_device_id": "system", 00:10:10.320 "dma_device_type": 1 00:10:10.320 }, 00:10:10.320 { 00:10:10.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.320 "dma_device_type": 2 00:10:10.320 }, 00:10:10.320 { 00:10:10.320 "dma_device_id": "system", 00:10:10.320 "dma_device_type": 1 00:10:10.320 }, 00:10:10.320 { 00:10:10.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.320 "dma_device_type": 2 00:10:10.320 }, 00:10:10.320 { 00:10:10.320 "dma_device_id": "system", 00:10:10.320 "dma_device_type": 1 00:10:10.320 }, 00:10:10.320 { 00:10:10.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.320 "dma_device_type": 2 00:10:10.320 }, 00:10:10.320 { 00:10:10.320 "dma_device_id": "system", 00:10:10.320 "dma_device_type": 1 00:10:10.320 }, 00:10:10.320 { 00:10:10.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.320 "dma_device_type": 2 00:10:10.320 } 00:10:10.320 ], 00:10:10.320 "driver_specific": { 00:10:10.320 "raid": { 00:10:10.320 "uuid": "d9996e27-e28b-41e0-9e26-6ed1d8334a35", 00:10:10.320 "strip_size_kb": 64, 00:10:10.320 "state": "online", 00:10:10.320 "raid_level": "raid0", 00:10:10.320 "superblock": true, 00:10:10.320 "num_base_bdevs": 4, 00:10:10.320 "num_base_bdevs_discovered": 4, 00:10:10.320 "num_base_bdevs_operational": 4, 00:10:10.320 "base_bdevs_list": [ 00:10:10.320 { 00:10:10.320 "name": "NewBaseBdev", 00:10:10.320 "uuid": "3f5038e6-18d3-48e9-bbbf-a05afbaaa9e6", 00:10:10.321 "is_configured": true, 00:10:10.321 "data_offset": 2048, 00:10:10.321 "data_size": 63488 00:10:10.321 }, 00:10:10.321 { 00:10:10.321 "name": "BaseBdev2", 00:10:10.321 "uuid": "e5cb7b4e-2b46-4ba6-b9b4-30710179f926", 00:10:10.321 "is_configured": true, 00:10:10.321 "data_offset": 2048, 00:10:10.321 "data_size": 63488 00:10:10.321 }, 00:10:10.321 { 00:10:10.321 
"name": "BaseBdev3", 00:10:10.321 "uuid": "23a588a0-4ddb-4dde-815e-4527f90b7048", 00:10:10.321 "is_configured": true, 00:10:10.321 "data_offset": 2048, 00:10:10.321 "data_size": 63488 00:10:10.321 }, 00:10:10.321 { 00:10:10.321 "name": "BaseBdev4", 00:10:10.321 "uuid": "31a87c43-c458-4cdc-bd5e-83b117f9baae", 00:10:10.321 "is_configured": true, 00:10:10.321 "data_offset": 2048, 00:10:10.321 "data_size": 63488 00:10:10.321 } 00:10:10.321 ] 00:10:10.321 } 00:10:10.321 } 00:10:10.321 }' 00:10:10.321 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.321 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:10.321 BaseBdev2 00:10:10.321 BaseBdev3 00:10:10.321 BaseBdev4' 00:10:10.321 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.596 03:10:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.596 03:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.596 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.596 03:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.596 03:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:10.596 03:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.596 03:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.596 03:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:10.596 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.596 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.597 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.597 03:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.597 03:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.597 03:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:10.597 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.597 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.597 [2024-11-18 03:10:14.052934] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.597 [2024-11-18 03:10:14.052977] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.597 [2024-11-18 03:10:14.053052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.597 [2024-11-18 03:10:14.053119] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.597 [2024-11-18 03:10:14.053129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:10:10.597 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.597 03:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81154 00:10:10.597 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81154 ']' 00:10:10.597 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 81154 00:10:10.597 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:10.597 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.597 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81154 00:10:10.597 killing process with pid 81154 00:10:10.597 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:10.597 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:10.597 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81154' 00:10:10.597 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 81154 00:10:10.597 [2024-11-18 03:10:14.104015] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.597 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 81154 00:10:10.597 [2024-11-18 03:10:14.145498] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:10.873 ************************************ 00:10:10.873 END TEST raid_state_function_test_sb 00:10:10.873 ************************************ 00:10:10.873 03:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:10.873 00:10:10.873 real 0m9.494s 00:10:10.873 user 0m16.317s 00:10:10.873 sys 
0m1.938s 00:10:10.873 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.873 03:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.873 03:10:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:10.873 03:10:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:10.873 03:10:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.873 03:10:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:11.133 ************************************ 00:10:11.133 START TEST raid_superblock_test 00:10:11.133 ************************************ 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81808 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81808 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81808 ']' 00:10:11.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:11.133 03:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.133 [2024-11-18 03:10:14.543319] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:11.133 [2024-11-18 03:10:14.543455] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81808 ] 00:10:11.133 [2024-11-18 03:10:14.701857] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.394 [2024-11-18 03:10:14.752113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.394 [2024-11-18 03:10:14.795074] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.394 [2024-11-18 03:10:14.795117] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:11.965 
03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.965 malloc1 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.965 [2024-11-18 03:10:15.401906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:11.965 [2024-11-18 03:10:15.402054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.965 [2024-11-18 03:10:15.402096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:11.965 [2024-11-18 03:10:15.402132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.965 [2024-11-18 03:10:15.404303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.965 [2024-11-18 03:10:15.404414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:11.965 pt1 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.965 malloc2 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.965 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.965 [2024-11-18 03:10:15.444521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:11.965 [2024-11-18 03:10:15.444624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.965 [2024-11-18 03:10:15.444659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:11.965 [2024-11-18 03:10:15.444690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.965 [2024-11-18 03:10:15.446825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.966 [2024-11-18 03:10:15.446912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:11.966 
pt2 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.966 malloc3 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.966 [2024-11-18 03:10:15.473344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:11.966 [2024-11-18 03:10:15.473460] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.966 [2024-11-18 03:10:15.473497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:11.966 [2024-11-18 03:10:15.473527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.966 [2024-11-18 03:10:15.475635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.966 [2024-11-18 03:10:15.475714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:11.966 pt3 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.966 malloc4 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.966 [2024-11-18 03:10:15.506043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:11.966 [2024-11-18 03:10:15.506142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.966 [2024-11-18 03:10:15.506175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:11.966 [2024-11-18 03:10:15.506206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.966 [2024-11-18 03:10:15.508338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.966 [2024-11-18 03:10:15.508429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:11.966 pt4 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.966 [2024-11-18 03:10:15.518100] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:11.966 [2024-11-18 
03:10:15.519992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:11.966 [2024-11-18 03:10:15.520090] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:11.966 [2024-11-18 03:10:15.520174] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:11.966 [2024-11-18 03:10:15.520367] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:11.966 [2024-11-18 03:10:15.520423] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:11.966 [2024-11-18 03:10:15.520700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:11.966 [2024-11-18 03:10:15.520894] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:11.966 [2024-11-18 03:10:15.520945] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:11.966 [2024-11-18 03:10:15.521142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.966 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.225 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.225 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.225 "name": "raid_bdev1", 00:10:12.225 "uuid": "e79cb1b6-e58f-49da-935e-f5d43482cd6c", 00:10:12.225 "strip_size_kb": 64, 00:10:12.225 "state": "online", 00:10:12.225 "raid_level": "raid0", 00:10:12.225 "superblock": true, 00:10:12.225 "num_base_bdevs": 4, 00:10:12.225 "num_base_bdevs_discovered": 4, 00:10:12.225 "num_base_bdevs_operational": 4, 00:10:12.225 "base_bdevs_list": [ 00:10:12.225 { 00:10:12.225 "name": "pt1", 00:10:12.225 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.226 "is_configured": true, 00:10:12.226 "data_offset": 2048, 00:10:12.226 "data_size": 63488 00:10:12.226 }, 00:10:12.226 { 00:10:12.226 "name": "pt2", 00:10:12.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.226 "is_configured": true, 00:10:12.226 "data_offset": 2048, 00:10:12.226 "data_size": 63488 00:10:12.226 }, 00:10:12.226 { 00:10:12.226 "name": "pt3", 00:10:12.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.226 "is_configured": true, 00:10:12.226 "data_offset": 2048, 00:10:12.226 
"data_size": 63488 00:10:12.226 }, 00:10:12.226 { 00:10:12.226 "name": "pt4", 00:10:12.226 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:12.226 "is_configured": true, 00:10:12.226 "data_offset": 2048, 00:10:12.226 "data_size": 63488 00:10:12.226 } 00:10:12.226 ] 00:10:12.226 }' 00:10:12.226 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.226 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.486 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:12.486 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:12.486 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:12.486 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:12.486 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:12.486 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:12.486 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:12.486 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.486 03:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:12.486 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.486 [2024-11-18 03:10:15.969608] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.486 03:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.486 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:12.486 "name": "raid_bdev1", 00:10:12.486 "aliases": [ 00:10:12.486 "e79cb1b6-e58f-49da-935e-f5d43482cd6c" 
00:10:12.486 ], 00:10:12.486 "product_name": "Raid Volume", 00:10:12.486 "block_size": 512, 00:10:12.486 "num_blocks": 253952, 00:10:12.486 "uuid": "e79cb1b6-e58f-49da-935e-f5d43482cd6c", 00:10:12.486 "assigned_rate_limits": { 00:10:12.486 "rw_ios_per_sec": 0, 00:10:12.486 "rw_mbytes_per_sec": 0, 00:10:12.486 "r_mbytes_per_sec": 0, 00:10:12.486 "w_mbytes_per_sec": 0 00:10:12.486 }, 00:10:12.486 "claimed": false, 00:10:12.486 "zoned": false, 00:10:12.486 "supported_io_types": { 00:10:12.486 "read": true, 00:10:12.486 "write": true, 00:10:12.486 "unmap": true, 00:10:12.486 "flush": true, 00:10:12.486 "reset": true, 00:10:12.486 "nvme_admin": false, 00:10:12.486 "nvme_io": false, 00:10:12.486 "nvme_io_md": false, 00:10:12.486 "write_zeroes": true, 00:10:12.486 "zcopy": false, 00:10:12.486 "get_zone_info": false, 00:10:12.486 "zone_management": false, 00:10:12.486 "zone_append": false, 00:10:12.486 "compare": false, 00:10:12.486 "compare_and_write": false, 00:10:12.486 "abort": false, 00:10:12.486 "seek_hole": false, 00:10:12.486 "seek_data": false, 00:10:12.486 "copy": false, 00:10:12.486 "nvme_iov_md": false 00:10:12.486 }, 00:10:12.486 "memory_domains": [ 00:10:12.486 { 00:10:12.486 "dma_device_id": "system", 00:10:12.486 "dma_device_type": 1 00:10:12.486 }, 00:10:12.486 { 00:10:12.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.486 "dma_device_type": 2 00:10:12.486 }, 00:10:12.486 { 00:10:12.486 "dma_device_id": "system", 00:10:12.486 "dma_device_type": 1 00:10:12.486 }, 00:10:12.486 { 00:10:12.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.486 "dma_device_type": 2 00:10:12.486 }, 00:10:12.486 { 00:10:12.486 "dma_device_id": "system", 00:10:12.486 "dma_device_type": 1 00:10:12.486 }, 00:10:12.486 { 00:10:12.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.486 "dma_device_type": 2 00:10:12.486 }, 00:10:12.486 { 00:10:12.486 "dma_device_id": "system", 00:10:12.486 "dma_device_type": 1 00:10:12.486 }, 00:10:12.486 { 00:10:12.486 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:12.486 "dma_device_type": 2 00:10:12.486 } 00:10:12.486 ], 00:10:12.486 "driver_specific": { 00:10:12.486 "raid": { 00:10:12.486 "uuid": "e79cb1b6-e58f-49da-935e-f5d43482cd6c", 00:10:12.486 "strip_size_kb": 64, 00:10:12.486 "state": "online", 00:10:12.486 "raid_level": "raid0", 00:10:12.486 "superblock": true, 00:10:12.486 "num_base_bdevs": 4, 00:10:12.486 "num_base_bdevs_discovered": 4, 00:10:12.486 "num_base_bdevs_operational": 4, 00:10:12.486 "base_bdevs_list": [ 00:10:12.486 { 00:10:12.486 "name": "pt1", 00:10:12.486 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.486 "is_configured": true, 00:10:12.486 "data_offset": 2048, 00:10:12.486 "data_size": 63488 00:10:12.486 }, 00:10:12.486 { 00:10:12.486 "name": "pt2", 00:10:12.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.486 "is_configured": true, 00:10:12.486 "data_offset": 2048, 00:10:12.486 "data_size": 63488 00:10:12.486 }, 00:10:12.486 { 00:10:12.486 "name": "pt3", 00:10:12.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.486 "is_configured": true, 00:10:12.486 "data_offset": 2048, 00:10:12.487 "data_size": 63488 00:10:12.487 }, 00:10:12.487 { 00:10:12.487 "name": "pt4", 00:10:12.487 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:12.487 "is_configured": true, 00:10:12.487 "data_offset": 2048, 00:10:12.487 "data_size": 63488 00:10:12.487 } 00:10:12.487 ] 00:10:12.487 } 00:10:12.487 } 00:10:12.487 }' 00:10:12.487 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:12.746 pt2 00:10:12.746 pt3 00:10:12.746 pt4' 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.746 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.747 03:10:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:12.747 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:13.007 [2024-11-18 03:10:16.325059] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.007 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.007 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e79cb1b6-e58f-49da-935e-f5d43482cd6c 00:10:13.007 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e79cb1b6-e58f-49da-935e-f5d43482cd6c ']' 00:10:13.007 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:13.007 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.007 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.007 [2024-11-18 03:10:16.372616] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.007 [2024-11-18 03:10:16.372721] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.007 [2024-11-18 03:10:16.372833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.007 [2024-11-18 03:10:16.372927] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.007 [2024-11-18 03:10:16.373046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:13.007 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.007 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.007 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.007 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:10:13.007 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.007 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.008 [2024-11-18 03:10:16.536364] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:13.008 [2024-11-18 03:10:16.538416] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:13.008 [2024-11-18 03:10:16.538509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:13.008 [2024-11-18 03:10:16.538559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:13.008 [2024-11-18 03:10:16.538642] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:13.008 [2024-11-18 03:10:16.538711] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:13.008 [2024-11-18 03:10:16.538779] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:13.008 [2024-11-18 03:10:16.538842] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:13.008 [2024-11-18 03:10:16.538933] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.008 [2024-11-18 03:10:16.538976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:10:13.008 request: 00:10:13.008 { 00:10:13.008 "name": "raid_bdev1", 00:10:13.008 "raid_level": "raid0", 00:10:13.008 "base_bdevs": [ 00:10:13.008 "malloc1", 00:10:13.008 "malloc2", 00:10:13.008 "malloc3", 00:10:13.008 "malloc4" 00:10:13.008 ], 00:10:13.008 "strip_size_kb": 64, 00:10:13.008 "superblock": false, 00:10:13.008 "method": "bdev_raid_create", 00:10:13.008 "req_id": 1 00:10:13.008 } 00:10:13.008 Got JSON-RPC error response 00:10:13.008 response: 00:10:13.008 { 00:10:13.008 "code": -17, 00:10:13.008 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:13.008 } 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.008 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.268 [2024-11-18 03:10:16.604187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:13.268 [2024-11-18 03:10:16.604299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.268 [2024-11-18 03:10:16.604353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:13.268 [2024-11-18 03:10:16.604404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.268 [2024-11-18 03:10:16.606693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.268 [2024-11-18 03:10:16.606765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:13.268 [2024-11-18 03:10:16.606861] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:13.268 [2024-11-18 03:10:16.606945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:13.268 pt1 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.268 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.268 "name": "raid_bdev1", 00:10:13.268 "uuid": "e79cb1b6-e58f-49da-935e-f5d43482cd6c", 00:10:13.268 "strip_size_kb": 64, 00:10:13.268 "state": "configuring", 00:10:13.268 "raid_level": "raid0", 00:10:13.268 "superblock": true, 00:10:13.268 "num_base_bdevs": 4, 00:10:13.268 "num_base_bdevs_discovered": 1, 00:10:13.268 "num_base_bdevs_operational": 4, 00:10:13.268 "base_bdevs_list": [ 00:10:13.268 { 00:10:13.268 "name": "pt1", 00:10:13.268 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.268 "is_configured": true, 00:10:13.268 "data_offset": 2048, 00:10:13.268 "data_size": 63488 00:10:13.268 }, 00:10:13.268 { 00:10:13.268 "name": null, 00:10:13.268 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.268 "is_configured": false, 00:10:13.268 "data_offset": 2048, 00:10:13.268 "data_size": 63488 00:10:13.268 }, 00:10:13.269 { 00:10:13.269 "name": null, 00:10:13.269 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:13.269 "is_configured": false, 00:10:13.269 "data_offset": 2048, 00:10:13.269 "data_size": 63488 00:10:13.269 }, 00:10:13.269 { 00:10:13.269 "name": null, 00:10:13.269 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:13.269 "is_configured": false, 00:10:13.269 "data_offset": 2048, 00:10:13.269 "data_size": 63488 00:10:13.269 } 00:10:13.269 ] 00:10:13.269 }' 00:10:13.269 03:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.269 03:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.528 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.529 [2024-11-18 03:10:17.063453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:13.529 [2024-11-18 03:10:17.063579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.529 [2024-11-18 03:10:17.063623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:13.529 [2024-11-18 03:10:17.063655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.529 [2024-11-18 03:10:17.064155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.529 [2024-11-18 03:10:17.064218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:13.529 [2024-11-18 03:10:17.064344] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:13.529 [2024-11-18 03:10:17.064399] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:13.529 pt2 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.529 [2024-11-18 03:10:17.075437] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.529 03:10:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.529 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.789 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.789 "name": "raid_bdev1", 00:10:13.789 "uuid": "e79cb1b6-e58f-49da-935e-f5d43482cd6c", 00:10:13.789 "strip_size_kb": 64, 00:10:13.789 "state": "configuring", 00:10:13.789 "raid_level": "raid0", 00:10:13.789 "superblock": true, 00:10:13.789 "num_base_bdevs": 4, 00:10:13.789 "num_base_bdevs_discovered": 1, 00:10:13.789 "num_base_bdevs_operational": 4, 00:10:13.789 "base_bdevs_list": [ 00:10:13.789 { 00:10:13.789 "name": "pt1", 00:10:13.789 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.789 "is_configured": true, 00:10:13.789 "data_offset": 2048, 00:10:13.789 "data_size": 63488 00:10:13.789 }, 00:10:13.789 { 00:10:13.789 "name": null, 00:10:13.789 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.789 "is_configured": false, 00:10:13.789 "data_offset": 0, 00:10:13.789 "data_size": 63488 00:10:13.789 }, 00:10:13.789 { 00:10:13.789 "name": null, 00:10:13.789 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.789 "is_configured": false, 00:10:13.789 "data_offset": 2048, 00:10:13.789 "data_size": 63488 00:10:13.789 }, 00:10:13.789 { 00:10:13.789 "name": null, 00:10:13.789 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:13.789 "is_configured": false, 00:10:13.789 "data_offset": 2048, 00:10:13.789 "data_size": 63488 00:10:13.789 } 00:10:13.789 ] 00:10:13.789 }' 00:10:13.789 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.789 03:10:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:14.048 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:14.048 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:14.048 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:14.048 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.048 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.048 [2024-11-18 03:10:17.526681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:14.048 [2024-11-18 03:10:17.526815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.048 [2024-11-18 03:10:17.526853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:14.048 [2024-11-18 03:10:17.526887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.048 [2024-11-18 03:10:17.527350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.048 [2024-11-18 03:10:17.527417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:14.048 [2024-11-18 03:10:17.527522] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:14.048 [2024-11-18 03:10:17.527579] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:14.048 pt2 00:10:14.048 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.048 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:14.048 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:14.048 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:14.048 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.048 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.048 [2024-11-18 03:10:17.538609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:14.048 [2024-11-18 03:10:17.538706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.048 [2024-11-18 03:10:17.538741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:14.048 [2024-11-18 03:10:17.538768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.048 [2024-11-18 03:10:17.539186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.048 [2024-11-18 03:10:17.539251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:14.049 [2024-11-18 03:10:17.539346] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:14.049 [2024-11-18 03:10:17.539400] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:14.049 pt3 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.049 [2024-11-18 03:10:17.550591] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:14.049 [2024-11-18 03:10:17.550699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.049 [2024-11-18 03:10:17.550719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:14.049 [2024-11-18 03:10:17.550730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.049 [2024-11-18 03:10:17.551056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.049 [2024-11-18 03:10:17.551076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:14.049 [2024-11-18 03:10:17.551136] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:14.049 [2024-11-18 03:10:17.551156] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:14.049 [2024-11-18 03:10:17.551251] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:14.049 [2024-11-18 03:10:17.551264] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:14.049 [2024-11-18 03:10:17.551492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:14.049 [2024-11-18 03:10:17.551605] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:14.049 [2024-11-18 03:10:17.551614] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:14.049 [2024-11-18 03:10:17.551708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.049 pt4 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.049 "name": "raid_bdev1", 00:10:14.049 "uuid": "e79cb1b6-e58f-49da-935e-f5d43482cd6c", 00:10:14.049 "strip_size_kb": 64, 00:10:14.049 "state": "online", 00:10:14.049 "raid_level": "raid0", 00:10:14.049 
"superblock": true, 00:10:14.049 "num_base_bdevs": 4, 00:10:14.049 "num_base_bdevs_discovered": 4, 00:10:14.049 "num_base_bdevs_operational": 4, 00:10:14.049 "base_bdevs_list": [ 00:10:14.049 { 00:10:14.049 "name": "pt1", 00:10:14.049 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.049 "is_configured": true, 00:10:14.049 "data_offset": 2048, 00:10:14.049 "data_size": 63488 00:10:14.049 }, 00:10:14.049 { 00:10:14.049 "name": "pt2", 00:10:14.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.049 "is_configured": true, 00:10:14.049 "data_offset": 2048, 00:10:14.049 "data_size": 63488 00:10:14.049 }, 00:10:14.049 { 00:10:14.049 "name": "pt3", 00:10:14.049 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.049 "is_configured": true, 00:10:14.049 "data_offset": 2048, 00:10:14.049 "data_size": 63488 00:10:14.049 }, 00:10:14.049 { 00:10:14.049 "name": "pt4", 00:10:14.049 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:14.049 "is_configured": true, 00:10:14.049 "data_offset": 2048, 00:10:14.049 "data_size": 63488 00:10:14.049 } 00:10:14.049 ] 00:10:14.049 }' 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.049 03:10:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:14.619 03:10:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.619 [2024-11-18 03:10:18.014158] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:14.619 "name": "raid_bdev1", 00:10:14.619 "aliases": [ 00:10:14.619 "e79cb1b6-e58f-49da-935e-f5d43482cd6c" 00:10:14.619 ], 00:10:14.619 "product_name": "Raid Volume", 00:10:14.619 "block_size": 512, 00:10:14.619 "num_blocks": 253952, 00:10:14.619 "uuid": "e79cb1b6-e58f-49da-935e-f5d43482cd6c", 00:10:14.619 "assigned_rate_limits": { 00:10:14.619 "rw_ios_per_sec": 0, 00:10:14.619 "rw_mbytes_per_sec": 0, 00:10:14.619 "r_mbytes_per_sec": 0, 00:10:14.619 "w_mbytes_per_sec": 0 00:10:14.619 }, 00:10:14.619 "claimed": false, 00:10:14.619 "zoned": false, 00:10:14.619 "supported_io_types": { 00:10:14.619 "read": true, 00:10:14.619 "write": true, 00:10:14.619 "unmap": true, 00:10:14.619 "flush": true, 00:10:14.619 "reset": true, 00:10:14.619 "nvme_admin": false, 00:10:14.619 "nvme_io": false, 00:10:14.619 "nvme_io_md": false, 00:10:14.619 "write_zeroes": true, 00:10:14.619 "zcopy": false, 00:10:14.619 "get_zone_info": false, 00:10:14.619 "zone_management": false, 00:10:14.619 "zone_append": false, 00:10:14.619 "compare": false, 00:10:14.619 "compare_and_write": false, 00:10:14.619 "abort": false, 00:10:14.619 "seek_hole": false, 00:10:14.619 "seek_data": false, 00:10:14.619 "copy": false, 00:10:14.619 "nvme_iov_md": false 00:10:14.619 }, 00:10:14.619 
"memory_domains": [ 00:10:14.619 { 00:10:14.619 "dma_device_id": "system", 00:10:14.619 "dma_device_type": 1 00:10:14.619 }, 00:10:14.619 { 00:10:14.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.619 "dma_device_type": 2 00:10:14.619 }, 00:10:14.619 { 00:10:14.619 "dma_device_id": "system", 00:10:14.619 "dma_device_type": 1 00:10:14.619 }, 00:10:14.619 { 00:10:14.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.619 "dma_device_type": 2 00:10:14.619 }, 00:10:14.619 { 00:10:14.619 "dma_device_id": "system", 00:10:14.619 "dma_device_type": 1 00:10:14.619 }, 00:10:14.619 { 00:10:14.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.619 "dma_device_type": 2 00:10:14.619 }, 00:10:14.619 { 00:10:14.619 "dma_device_id": "system", 00:10:14.619 "dma_device_type": 1 00:10:14.619 }, 00:10:14.619 { 00:10:14.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.619 "dma_device_type": 2 00:10:14.619 } 00:10:14.619 ], 00:10:14.619 "driver_specific": { 00:10:14.619 "raid": { 00:10:14.619 "uuid": "e79cb1b6-e58f-49da-935e-f5d43482cd6c", 00:10:14.619 "strip_size_kb": 64, 00:10:14.619 "state": "online", 00:10:14.619 "raid_level": "raid0", 00:10:14.619 "superblock": true, 00:10:14.619 "num_base_bdevs": 4, 00:10:14.619 "num_base_bdevs_discovered": 4, 00:10:14.619 "num_base_bdevs_operational": 4, 00:10:14.619 "base_bdevs_list": [ 00:10:14.619 { 00:10:14.619 "name": "pt1", 00:10:14.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.619 "is_configured": true, 00:10:14.619 "data_offset": 2048, 00:10:14.619 "data_size": 63488 00:10:14.619 }, 00:10:14.619 { 00:10:14.619 "name": "pt2", 00:10:14.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.619 "is_configured": true, 00:10:14.619 "data_offset": 2048, 00:10:14.619 "data_size": 63488 00:10:14.619 }, 00:10:14.619 { 00:10:14.619 "name": "pt3", 00:10:14.619 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.619 "is_configured": true, 00:10:14.619 "data_offset": 2048, 00:10:14.619 "data_size": 63488 
00:10:14.619 }, 00:10:14.619 { 00:10:14.619 "name": "pt4", 00:10:14.619 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:14.619 "is_configured": true, 00:10:14.619 "data_offset": 2048, 00:10:14.619 "data_size": 63488 00:10:14.619 } 00:10:14.619 ] 00:10:14.619 } 00:10:14.619 } 00:10:14.619 }' 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:14.619 pt2 00:10:14.619 pt3 00:10:14.619 pt4' 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.619 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.620 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.620 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.620 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:14.620 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.620 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.620 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.620 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.880 [2024-11-18 03:10:18.309625] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e79cb1b6-e58f-49da-935e-f5d43482cd6c '!=' e79cb1b6-e58f-49da-935e-f5d43482cd6c ']' 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81808 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81808 ']' 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81808 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81808 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81808' 00:10:14.880 killing process with pid 81808 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81808 00:10:14.880 [2024-11-18 03:10:18.382007] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:14.880 [2024-11-18 03:10:18.382169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.880 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81808 00:10:14.880 [2024-11-18 03:10:18.382277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.880 [2024-11-18 03:10:18.382293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:14.880 [2024-11-18 03:10:18.427253] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.140 03:10:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:15.140 00:10:15.140 real 0m4.214s 00:10:15.140 user 0m6.670s 00:10:15.140 sys 0m0.893s 00:10:15.140 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.140 03:10:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.140 ************************************ 00:10:15.140 END TEST raid_superblock_test 
00:10:15.140 ************************************ 00:10:15.400 03:10:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:15.400 03:10:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:15.400 03:10:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.400 03:10:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.400 ************************************ 00:10:15.400 START TEST raid_read_error_test 00:10:15.400 ************************************ 00:10:15.400 03:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:10:15.400 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:15.400 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:15.400 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:15.400 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:15.400 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.400 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:15.400 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.400 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.400 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:15.400 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.400 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.400 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zIagH4yXxy 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82056 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82056 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 82056 ']' 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.401 03:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.401 [2024-11-18 03:10:18.843371] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:15.401 [2024-11-18 03:10:18.843583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82056 ] 00:10:15.661 [2024-11-18 03:10:19.003831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.661 [2024-11-18 03:10:19.054291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.661 [2024-11-18 03:10:19.096885] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.661 [2024-11-18 03:10:19.096922] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.231 BaseBdev1_malloc 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.231 true 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.231 [2024-11-18 03:10:19.747423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:16.231 [2024-11-18 03:10:19.747524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.231 [2024-11-18 03:10:19.747569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:16.231 [2024-11-18 03:10:19.747582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.231 [2024-11-18 03:10:19.749713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.231 [2024-11-18 03:10:19.749752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:16.231 BaseBdev1 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.231 BaseBdev2_malloc 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.231 true 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.231 [2024-11-18 03:10:19.791472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:16.231 [2024-11-18 03:10:19.791575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.231 [2024-11-18 03:10:19.791611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:16.231 [2024-11-18 03:10:19.791650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.231 [2024-11-18 03:10:19.793717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.231 [2024-11-18 03:10:19.793789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:16.231 BaseBdev2 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.231 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.491 BaseBdev3_malloc 00:10:16.491 03:10:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.491 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:16.491 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.491 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.491 true 00:10:16.491 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.491 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:16.491 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.491 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.491 [2024-11-18 03:10:19.832306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:16.491 [2024-11-18 03:10:19.832445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.491 [2024-11-18 03:10:19.832483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:16.491 [2024-11-18 03:10:19.832516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.491 [2024-11-18 03:10:19.834588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.491 [2024-11-18 03:10:19.834662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:16.491 BaseBdev3 00:10:16.491 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.491 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.491 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:16.491 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.491 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.491 BaseBdev4_malloc 00:10:16.491 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.492 true 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.492 [2024-11-18 03:10:19.873179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:16.492 [2024-11-18 03:10:19.873287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.492 [2024-11-18 03:10:19.873335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:16.492 [2024-11-18 03:10:19.873366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.492 [2024-11-18 03:10:19.875416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.492 [2024-11-18 03:10:19.875494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:16.492 BaseBdev4 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.492 [2024-11-18 03:10:19.885204] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.492 [2024-11-18 03:10:19.887096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.492 [2024-11-18 03:10:19.887241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.492 [2024-11-18 03:10:19.887336] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:16.492 [2024-11-18 03:10:19.887562] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:16.492 [2024-11-18 03:10:19.887610] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:16.492 [2024-11-18 03:10:19.887885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:16.492 [2024-11-18 03:10:19.888067] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:16.492 [2024-11-18 03:10:19.888115] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:16.492 [2024-11-18 03:10:19.888277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:16.492 03:10:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.492 "name": "raid_bdev1", 00:10:16.492 "uuid": "b928742c-c09f-4958-8003-127b0ebcbc77", 00:10:16.492 "strip_size_kb": 64, 00:10:16.492 "state": "online", 00:10:16.492 "raid_level": "raid0", 00:10:16.492 "superblock": true, 00:10:16.492 "num_base_bdevs": 4, 00:10:16.492 "num_base_bdevs_discovered": 4, 00:10:16.492 "num_base_bdevs_operational": 4, 00:10:16.492 "base_bdevs_list": [ 00:10:16.492 
{ 00:10:16.492 "name": "BaseBdev1", 00:10:16.492 "uuid": "aa8dd2c5-e565-5255-a640-49c3c0d9ea80", 00:10:16.492 "is_configured": true, 00:10:16.492 "data_offset": 2048, 00:10:16.492 "data_size": 63488 00:10:16.492 }, 00:10:16.492 { 00:10:16.492 "name": "BaseBdev2", 00:10:16.492 "uuid": "0cff82df-fa7c-5496-8331-a36469101b8d", 00:10:16.492 "is_configured": true, 00:10:16.492 "data_offset": 2048, 00:10:16.492 "data_size": 63488 00:10:16.492 }, 00:10:16.492 { 00:10:16.492 "name": "BaseBdev3", 00:10:16.492 "uuid": "0ea7728f-5ecf-5a2b-889e-50541d3db8ba", 00:10:16.492 "is_configured": true, 00:10:16.492 "data_offset": 2048, 00:10:16.492 "data_size": 63488 00:10:16.492 }, 00:10:16.492 { 00:10:16.492 "name": "BaseBdev4", 00:10:16.492 "uuid": "7a2af962-d717-540b-9899-b6f0a3d5fdb6", 00:10:16.492 "is_configured": true, 00:10:16.492 "data_offset": 2048, 00:10:16.492 "data_size": 63488 00:10:16.492 } 00:10:16.492 ] 00:10:16.492 }' 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.492 03:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.060 03:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:17.060 03:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:17.060 [2024-11-18 03:10:20.448600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:17.998 03:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:17.998 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.998 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.998 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.998 03:10:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:17.998 03:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:17.998 03:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:17.998 03:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:17.998 03:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.998 03:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.998 03:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.998 03:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.999 03:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.999 03:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.999 03:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.999 03:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.999 03:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.999 03:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.999 03:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.999 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.999 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.999 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.999 03:10:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.999 "name": "raid_bdev1", 00:10:17.999 "uuid": "b928742c-c09f-4958-8003-127b0ebcbc77", 00:10:17.999 "strip_size_kb": 64, 00:10:17.999 "state": "online", 00:10:17.999 "raid_level": "raid0", 00:10:17.999 "superblock": true, 00:10:17.999 "num_base_bdevs": 4, 00:10:17.999 "num_base_bdevs_discovered": 4, 00:10:17.999 "num_base_bdevs_operational": 4, 00:10:17.999 "base_bdevs_list": [ 00:10:17.999 { 00:10:17.999 "name": "BaseBdev1", 00:10:17.999 "uuid": "aa8dd2c5-e565-5255-a640-49c3c0d9ea80", 00:10:17.999 "is_configured": true, 00:10:17.999 "data_offset": 2048, 00:10:17.999 "data_size": 63488 00:10:17.999 }, 00:10:17.999 { 00:10:17.999 "name": "BaseBdev2", 00:10:17.999 "uuid": "0cff82df-fa7c-5496-8331-a36469101b8d", 00:10:17.999 "is_configured": true, 00:10:17.999 "data_offset": 2048, 00:10:17.999 "data_size": 63488 00:10:17.999 }, 00:10:17.999 { 00:10:17.999 "name": "BaseBdev3", 00:10:17.999 "uuid": "0ea7728f-5ecf-5a2b-889e-50541d3db8ba", 00:10:17.999 "is_configured": true, 00:10:17.999 "data_offset": 2048, 00:10:17.999 "data_size": 63488 00:10:17.999 }, 00:10:17.999 { 00:10:17.999 "name": "BaseBdev4", 00:10:17.999 "uuid": "7a2af962-d717-540b-9899-b6f0a3d5fdb6", 00:10:17.999 "is_configured": true, 00:10:17.999 "data_offset": 2048, 00:10:17.999 "data_size": 63488 00:10:17.999 } 00:10:17.999 ] 00:10:17.999 }' 00:10:17.999 03:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.999 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.260 03:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:18.260 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.260 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.260 [2024-11-18 03:10:21.788386] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.260 [2024-11-18 03:10:21.788474] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.260 [2024-11-18 03:10:21.791109] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.260 [2024-11-18 03:10:21.791198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.260 [2024-11-18 03:10:21.791282] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.260 [2024-11-18 03:10:21.791343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:18.260 { 00:10:18.260 "results": [ 00:10:18.260 { 00:10:18.260 "job": "raid_bdev1", 00:10:18.260 "core_mask": "0x1", 00:10:18.260 "workload": "randrw", 00:10:18.260 "percentage": 50, 00:10:18.260 "status": "finished", 00:10:18.260 "queue_depth": 1, 00:10:18.260 "io_size": 131072, 00:10:18.260 "runtime": 1.34052, 00:10:18.260 "iops": 16101.960433264703, 00:10:18.260 "mibps": 2012.745054158088, 00:10:18.260 "io_failed": 1, 00:10:18.260 "io_timeout": 0, 00:10:18.260 "avg_latency_us": 86.15069722127029, 00:10:18.260 "min_latency_us": 26.270742358078603, 00:10:18.260 "max_latency_us": 1609.7816593886462 00:10:18.260 } 00:10:18.260 ], 00:10:18.260 "core_count": 1 00:10:18.260 } 00:10:18.260 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.260 03:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82056 00:10:18.260 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 82056 ']' 00:10:18.260 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 82056 00:10:18.260 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:18.260 03:10:21 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:18.260 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82056 00:10:18.260 killing process with pid 82056 00:10:18.260 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:18.260 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:18.260 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82056' 00:10:18.260 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 82056 00:10:18.260 [2024-11-18 03:10:21.833161] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:18.260 03:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 82056 00:10:18.519 [2024-11-18 03:10:21.869150] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:18.778 03:10:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:18.778 03:10:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zIagH4yXxy 00:10:18.778 03:10:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:18.778 03:10:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:10:18.778 03:10:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:18.778 03:10:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:18.778 03:10:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:18.778 03:10:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:10:18.778 00:10:18.778 real 0m3.372s 00:10:18.778 user 0m4.313s 00:10:18.778 sys 0m0.529s 00:10:18.778 03:10:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:18.778 03:10:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.778 ************************************ 00:10:18.778 END TEST raid_read_error_test 00:10:18.778 ************************************ 00:10:18.778 03:10:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:18.778 03:10:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:18.778 03:10:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.778 03:10:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:18.778 ************************************ 00:10:18.778 START TEST raid_write_error_test 00:10:18.778 ************************************ 00:10:18.778 03:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:10:18.778 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:18.778 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:18.778 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:18.778 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:18.778 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.778 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:18.778 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.778 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.778 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:18.778 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.778 03:10:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.778 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:18.778 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.778 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DHetMsqVW5 00:10:18.779 03:10:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82185 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82185 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 82185 ']' 00:10:18.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:18.779 03:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.779 [2024-11-18 03:10:22.286735] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:18.779 [2024-11-18 03:10:22.286866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82185 ] 00:10:19.038 [2024-11-18 03:10:22.448911] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.038 [2024-11-18 03:10:22.500677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.038 [2024-11-18 03:10:22.544192] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.038 [2024-11-18 03:10:22.544299] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.606 BaseBdev1_malloc 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.606 true 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.606 [2024-11-18 03:10:23.159044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:19.606 [2024-11-18 03:10:23.159166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.606 [2024-11-18 03:10:23.159213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:19.606 [2024-11-18 03:10:23.159268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.606 [2024-11-18 03:10:23.161409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.606 [2024-11-18 03:10:23.161480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:19.606 BaseBdev1 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.606 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.866 BaseBdev2_malloc 00:10:19.866 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.866 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:19.866 03:10:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.866 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.866 true 00:10:19.866 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.866 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:19.866 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.866 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.866 [2024-11-18 03:10:23.208662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:19.866 [2024-11-18 03:10:23.208776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.866 [2024-11-18 03:10:23.208801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:19.867 [2024-11-18 03:10:23.208809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.867 [2024-11-18 03:10:23.210861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.867 [2024-11-18 03:10:23.210899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:19.867 BaseBdev2 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:19.867 BaseBdev3_malloc 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.867 true 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.867 [2024-11-18 03:10:23.249371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:19.867 [2024-11-18 03:10:23.249466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.867 [2024-11-18 03:10:23.249518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:19.867 [2024-11-18 03:10:23.249550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.867 [2024-11-18 03:10:23.251634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.867 [2024-11-18 03:10:23.251709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:19.867 BaseBdev3 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.867 BaseBdev4_malloc 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.867 true 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.867 [2024-11-18 03:10:23.290098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:19.867 [2024-11-18 03:10:23.290191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.867 [2024-11-18 03:10:23.290245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:19.867 [2024-11-18 03:10:23.290277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.867 [2024-11-18 03:10:23.292324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.867 [2024-11-18 03:10:23.292397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:19.867 BaseBdev4 
00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.867 [2024-11-18 03:10:23.302132] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.867 [2024-11-18 03:10:23.304032] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.867 [2024-11-18 03:10:23.304162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.867 [2024-11-18 03:10:23.304256] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:19.867 [2024-11-18 03:10:23.304481] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:19.867 [2024-11-18 03:10:23.304529] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:19.867 [2024-11-18 03:10:23.304789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:19.867 [2024-11-18 03:10:23.304980] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:19.867 [2024-11-18 03:10:23.305028] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:19.867 [2024-11-18 03:10:23.305205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.867 "name": "raid_bdev1", 00:10:19.867 "uuid": "0eec8061-c9a4-46db-9c3e-17cb50cd07cb", 00:10:19.867 "strip_size_kb": 64, 00:10:19.867 "state": "online", 00:10:19.867 "raid_level": "raid0", 00:10:19.867 "superblock": true, 00:10:19.867 "num_base_bdevs": 4, 00:10:19.867 "num_base_bdevs_discovered": 4, 00:10:19.867 
"num_base_bdevs_operational": 4, 00:10:19.867 "base_bdevs_list": [ 00:10:19.867 { 00:10:19.867 "name": "BaseBdev1", 00:10:19.867 "uuid": "29595201-1e20-5133-96ba-f41f95f98567", 00:10:19.867 "is_configured": true, 00:10:19.867 "data_offset": 2048, 00:10:19.867 "data_size": 63488 00:10:19.867 }, 00:10:19.867 { 00:10:19.867 "name": "BaseBdev2", 00:10:19.867 "uuid": "e849011e-1cc7-50a1-8af9-838ee1e44493", 00:10:19.867 "is_configured": true, 00:10:19.867 "data_offset": 2048, 00:10:19.867 "data_size": 63488 00:10:19.867 }, 00:10:19.867 { 00:10:19.867 "name": "BaseBdev3", 00:10:19.867 "uuid": "1bf2e1cf-21c8-5d56-a55d-2c3a8b41dcaa", 00:10:19.867 "is_configured": true, 00:10:19.867 "data_offset": 2048, 00:10:19.867 "data_size": 63488 00:10:19.867 }, 00:10:19.867 { 00:10:19.867 "name": "BaseBdev4", 00:10:19.867 "uuid": "7b0d1ee0-3d5e-5e29-b6da-2fc5dbce6c1b", 00:10:19.867 "is_configured": true, 00:10:19.867 "data_offset": 2048, 00:10:19.867 "data_size": 63488 00:10:19.867 } 00:10:19.867 ] 00:10:19.867 }' 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.867 03:10:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.436 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:20.436 03:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:20.436 [2024-11-18 03:10:23.805648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:21.376 03:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:21.376 03:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.376 03:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.376 03:10:24 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.376 03:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:21.376 03:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:21.376 03:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:21.376 03:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:21.376 03:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.376 03:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.376 03:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.376 03:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.376 03:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.376 03:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.376 03:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.376 03:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.376 03:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.377 03:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.377 03:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.377 03:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.377 03:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.377 03:10:24 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.377 03:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.377 "name": "raid_bdev1", 00:10:21.377 "uuid": "0eec8061-c9a4-46db-9c3e-17cb50cd07cb", 00:10:21.377 "strip_size_kb": 64, 00:10:21.377 "state": "online", 00:10:21.377 "raid_level": "raid0", 00:10:21.377 "superblock": true, 00:10:21.377 "num_base_bdevs": 4, 00:10:21.377 "num_base_bdevs_discovered": 4, 00:10:21.377 "num_base_bdevs_operational": 4, 00:10:21.377 "base_bdevs_list": [ 00:10:21.377 { 00:10:21.377 "name": "BaseBdev1", 00:10:21.377 "uuid": "29595201-1e20-5133-96ba-f41f95f98567", 00:10:21.377 "is_configured": true, 00:10:21.377 "data_offset": 2048, 00:10:21.377 "data_size": 63488 00:10:21.377 }, 00:10:21.377 { 00:10:21.377 "name": "BaseBdev2", 00:10:21.377 "uuid": "e849011e-1cc7-50a1-8af9-838ee1e44493", 00:10:21.377 "is_configured": true, 00:10:21.377 "data_offset": 2048, 00:10:21.377 "data_size": 63488 00:10:21.377 }, 00:10:21.377 { 00:10:21.377 "name": "BaseBdev3", 00:10:21.377 "uuid": "1bf2e1cf-21c8-5d56-a55d-2c3a8b41dcaa", 00:10:21.377 "is_configured": true, 00:10:21.377 "data_offset": 2048, 00:10:21.377 "data_size": 63488 00:10:21.377 }, 00:10:21.377 { 00:10:21.377 "name": "BaseBdev4", 00:10:21.377 "uuid": "7b0d1ee0-3d5e-5e29-b6da-2fc5dbce6c1b", 00:10:21.377 "is_configured": true, 00:10:21.377 "data_offset": 2048, 00:10:21.377 "data_size": 63488 00:10:21.377 } 00:10:21.377 ] 00:10:21.377 }' 00:10:21.377 03:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.377 03:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.646 03:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:21.646 03:10:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.646 03:10:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:21.646 [2024-11-18 03:10:25.185551] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.646 [2024-11-18 03:10:25.185647] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.646 [2024-11-18 03:10:25.188549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.646 [2024-11-18 03:10:25.188638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.646 [2024-11-18 03:10:25.188706] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.646 [2024-11-18 03:10:25.188768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:21.646 { 00:10:21.646 "results": [ 00:10:21.646 { 00:10:21.646 "job": "raid_bdev1", 00:10:21.646 "core_mask": "0x1", 00:10:21.646 "workload": "randrw", 00:10:21.646 "percentage": 50, 00:10:21.646 "status": "finished", 00:10:21.646 "queue_depth": 1, 00:10:21.646 "io_size": 131072, 00:10:21.646 "runtime": 1.380785, 00:10:21.646 "iops": 15913.411573851106, 00:10:21.646 "mibps": 1989.1764467313883, 00:10:21.646 "io_failed": 1, 00:10:21.646 "io_timeout": 0, 00:10:21.646 "avg_latency_us": 87.17695823925297, 00:10:21.646 "min_latency_us": 26.494323144104804, 00:10:21.646 "max_latency_us": 1445.2262008733624 00:10:21.646 } 00:10:21.646 ], 00:10:21.646 "core_count": 1 00:10:21.646 } 00:10:21.646 03:10:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.646 03:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82185 00:10:21.646 03:10:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 82185 ']' 00:10:21.646 03:10:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 82185 00:10:21.646 03:10:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # 
uname 00:10:21.646 03:10:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:21.646 03:10:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82185 00:10:21.925 killing process with pid 82185 00:10:21.925 03:10:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:21.925 03:10:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:21.925 03:10:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82185' 00:10:21.925 03:10:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 82185 00:10:21.925 [2024-11-18 03:10:25.231461] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.925 03:10:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 82185 00:10:21.925 [2024-11-18 03:10:25.267865] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:22.186 03:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DHetMsqVW5 00:10:22.186 03:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:22.186 03:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:22.186 03:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:22.186 03:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:22.186 03:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:22.186 03:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:22.186 03:10:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:22.186 00:10:22.186 real 0m3.333s 00:10:22.186 user 0m4.157s 00:10:22.186 sys 0m0.552s 00:10:22.186 
************************************ 00:10:22.186 END TEST raid_write_error_test 00:10:22.186 ************************************ 00:10:22.186 03:10:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.186 03:10:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.186 03:10:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:22.186 03:10:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:22.186 03:10:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:22.186 03:10:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.186 03:10:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:22.186 ************************************ 00:10:22.186 START TEST raid_state_function_test 00:10:22.186 ************************************ 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:22.186 03:10:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:22.186 03:10:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82312 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82312' 00:10:22.186 Process raid pid: 82312 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82312 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82312 ']' 00:10:22.186 03:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.187 03:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.187 03:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.187 03:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.187 03:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.187 [2024-11-18 03:10:25.689492] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:22.187 [2024-11-18 03:10:25.689693] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.446 [2024-11-18 03:10:25.850448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.446 [2024-11-18 03:10:25.901163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.446 [2024-11-18 03:10:25.944308] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.446 [2024-11-18 03:10:25.944427] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.016 [2024-11-18 03:10:26.542155] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:23.016 [2024-11-18 03:10:26.542275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:23.016 [2024-11-18 03:10:26.542309] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:23.016 [2024-11-18 03:10:26.542334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:23.016 [2024-11-18 03:10:26.542353] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:23.016 [2024-11-18 03:10:26.542379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:23.016 [2024-11-18 03:10:26.542397] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:23.016 [2024-11-18 03:10:26.542418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.016 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.276 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.276 "name": "Existed_Raid", 00:10:23.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.276 "strip_size_kb": 64, 00:10:23.276 "state": "configuring", 00:10:23.276 "raid_level": "concat", 00:10:23.276 "superblock": false, 00:10:23.276 "num_base_bdevs": 4, 00:10:23.276 "num_base_bdevs_discovered": 0, 00:10:23.276 "num_base_bdevs_operational": 4, 00:10:23.276 "base_bdevs_list": [ 00:10:23.276 { 00:10:23.276 "name": "BaseBdev1", 00:10:23.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.276 "is_configured": false, 00:10:23.276 "data_offset": 0, 00:10:23.276 "data_size": 0 00:10:23.276 }, 00:10:23.276 { 00:10:23.276 "name": "BaseBdev2", 00:10:23.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.276 "is_configured": false, 00:10:23.276 "data_offset": 0, 00:10:23.276 "data_size": 0 00:10:23.276 }, 00:10:23.276 { 00:10:23.276 "name": "BaseBdev3", 00:10:23.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.276 "is_configured": false, 00:10:23.276 "data_offset": 0, 00:10:23.276 "data_size": 0 00:10:23.276 }, 00:10:23.276 { 00:10:23.276 "name": "BaseBdev4", 00:10:23.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.276 "is_configured": false, 00:10:23.276 "data_offset": 0, 00:10:23.276 "data_size": 0 00:10:23.276 } 00:10:23.276 ] 00:10:23.276 }' 00:10:23.276 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.276 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.536 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:23.536 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.536 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.536 [2024-11-18 03:10:26.965326] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:23.536 [2024-11-18 03:10:26.965444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:23.536 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.536 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:23.537 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.537 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.537 [2024-11-18 03:10:26.977344] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:23.537 [2024-11-18 03:10:26.977443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:23.537 [2024-11-18 03:10:26.977471] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:23.537 [2024-11-18 03:10:26.977493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:23.537 [2024-11-18 03:10:26.977512] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:23.537 [2024-11-18 03:10:26.977533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:23.537 [2024-11-18 03:10:26.977551] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:23.537 [2024-11-18 03:10:26.977572] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:23.537 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.537 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:23.537 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.537 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.537 [2024-11-18 03:10:26.998297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.537 BaseBdev1 00:10:23.537 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.537 03:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:23.537 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:23.537 03:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.537 [ 00:10:23.537 { 00:10:23.537 "name": "BaseBdev1", 00:10:23.537 "aliases": [ 00:10:23.537 "88615078-264c-4c32-9ac7-1f9e5c0f338e" 00:10:23.537 ], 00:10:23.537 "product_name": "Malloc disk", 00:10:23.537 "block_size": 512, 00:10:23.537 "num_blocks": 65536, 00:10:23.537 "uuid": "88615078-264c-4c32-9ac7-1f9e5c0f338e", 00:10:23.537 "assigned_rate_limits": { 00:10:23.537 "rw_ios_per_sec": 0, 00:10:23.537 "rw_mbytes_per_sec": 0, 00:10:23.537 "r_mbytes_per_sec": 0, 00:10:23.537 "w_mbytes_per_sec": 0 00:10:23.537 }, 00:10:23.537 "claimed": true, 00:10:23.537 "claim_type": "exclusive_write", 00:10:23.537 "zoned": false, 00:10:23.537 "supported_io_types": { 00:10:23.537 "read": true, 00:10:23.537 "write": true, 00:10:23.537 "unmap": true, 00:10:23.537 "flush": true, 00:10:23.537 "reset": true, 00:10:23.537 "nvme_admin": false, 00:10:23.537 "nvme_io": false, 00:10:23.537 "nvme_io_md": false, 00:10:23.537 "write_zeroes": true, 00:10:23.537 "zcopy": true, 00:10:23.537 "get_zone_info": false, 00:10:23.537 "zone_management": false, 00:10:23.537 "zone_append": false, 00:10:23.537 "compare": false, 00:10:23.537 "compare_and_write": false, 00:10:23.537 "abort": true, 00:10:23.537 "seek_hole": false, 00:10:23.537 "seek_data": false, 00:10:23.537 "copy": true, 00:10:23.537 "nvme_iov_md": false 00:10:23.537 }, 00:10:23.537 "memory_domains": [ 00:10:23.537 { 00:10:23.537 "dma_device_id": "system", 00:10:23.537 "dma_device_type": 1 00:10:23.537 }, 00:10:23.537 { 00:10:23.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.537 "dma_device_type": 2 00:10:23.537 } 00:10:23.537 ], 00:10:23.537 "driver_specific": {} 00:10:23.537 } 00:10:23.537 ] 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.537 "name": "Existed_Raid", 
00:10:23.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.537 "strip_size_kb": 64, 00:10:23.537 "state": "configuring", 00:10:23.537 "raid_level": "concat", 00:10:23.537 "superblock": false, 00:10:23.537 "num_base_bdevs": 4, 00:10:23.537 "num_base_bdevs_discovered": 1, 00:10:23.537 "num_base_bdevs_operational": 4, 00:10:23.537 "base_bdevs_list": [ 00:10:23.537 { 00:10:23.537 "name": "BaseBdev1", 00:10:23.537 "uuid": "88615078-264c-4c32-9ac7-1f9e5c0f338e", 00:10:23.537 "is_configured": true, 00:10:23.537 "data_offset": 0, 00:10:23.537 "data_size": 65536 00:10:23.537 }, 00:10:23.537 { 00:10:23.537 "name": "BaseBdev2", 00:10:23.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.537 "is_configured": false, 00:10:23.537 "data_offset": 0, 00:10:23.537 "data_size": 0 00:10:23.537 }, 00:10:23.537 { 00:10:23.537 "name": "BaseBdev3", 00:10:23.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.537 "is_configured": false, 00:10:23.537 "data_offset": 0, 00:10:23.537 "data_size": 0 00:10:23.537 }, 00:10:23.537 { 00:10:23.537 "name": "BaseBdev4", 00:10:23.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.537 "is_configured": false, 00:10:23.537 "data_offset": 0, 00:10:23.537 "data_size": 0 00:10:23.537 } 00:10:23.537 ] 00:10:23.537 }' 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.537 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.108 [2024-11-18 03:10:27.493514] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:24.108 [2024-11-18 03:10:27.493616] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.108 [2024-11-18 03:10:27.505528] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.108 [2024-11-18 03:10:27.507494] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.108 [2024-11-18 03:10:27.507576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.108 [2024-11-18 03:10:27.507621] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:24.108 [2024-11-18 03:10:27.507647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:24.108 [2024-11-18 03:10:27.507668] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:24.108 [2024-11-18 03:10:27.507691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.108 "name": "Existed_Raid", 00:10:24.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.108 "strip_size_kb": 64, 00:10:24.108 "state": "configuring", 00:10:24.108 "raid_level": "concat", 00:10:24.108 "superblock": false, 00:10:24.108 "num_base_bdevs": 4, 00:10:24.108 
"num_base_bdevs_discovered": 1, 00:10:24.108 "num_base_bdevs_operational": 4, 00:10:24.108 "base_bdevs_list": [ 00:10:24.108 { 00:10:24.108 "name": "BaseBdev1", 00:10:24.108 "uuid": "88615078-264c-4c32-9ac7-1f9e5c0f338e", 00:10:24.108 "is_configured": true, 00:10:24.108 "data_offset": 0, 00:10:24.108 "data_size": 65536 00:10:24.108 }, 00:10:24.108 { 00:10:24.108 "name": "BaseBdev2", 00:10:24.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.108 "is_configured": false, 00:10:24.108 "data_offset": 0, 00:10:24.108 "data_size": 0 00:10:24.108 }, 00:10:24.108 { 00:10:24.108 "name": "BaseBdev3", 00:10:24.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.108 "is_configured": false, 00:10:24.108 "data_offset": 0, 00:10:24.108 "data_size": 0 00:10:24.108 }, 00:10:24.108 { 00:10:24.108 "name": "BaseBdev4", 00:10:24.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.108 "is_configured": false, 00:10:24.108 "data_offset": 0, 00:10:24.108 "data_size": 0 00:10:24.108 } 00:10:24.108 ] 00:10:24.108 }' 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.108 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.678 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:24.678 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.678 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.678 [2024-11-18 03:10:27.996601] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.678 BaseBdev2 00:10:24.678 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.678 03:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:24.678 03:10:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:24.678 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:24.678 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:24.678 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:24.678 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:24.678 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:24.678 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.678 03:10:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.678 [ 00:10:24.678 { 00:10:24.678 "name": "BaseBdev2", 00:10:24.678 "aliases": [ 00:10:24.678 "194759fc-dadb-48bf-b31d-6840ec29a393" 00:10:24.678 ], 00:10:24.678 "product_name": "Malloc disk", 00:10:24.678 "block_size": 512, 00:10:24.678 "num_blocks": 65536, 00:10:24.678 "uuid": "194759fc-dadb-48bf-b31d-6840ec29a393", 00:10:24.678 "assigned_rate_limits": { 00:10:24.678 "rw_ios_per_sec": 0, 00:10:24.678 "rw_mbytes_per_sec": 0, 00:10:24.678 "r_mbytes_per_sec": 0, 00:10:24.678 "w_mbytes_per_sec": 0 00:10:24.678 }, 00:10:24.678 "claimed": true, 00:10:24.678 "claim_type": "exclusive_write", 00:10:24.678 "zoned": false, 00:10:24.678 "supported_io_types": { 
00:10:24.678 "read": true, 00:10:24.678 "write": true, 00:10:24.678 "unmap": true, 00:10:24.678 "flush": true, 00:10:24.678 "reset": true, 00:10:24.678 "nvme_admin": false, 00:10:24.678 "nvme_io": false, 00:10:24.678 "nvme_io_md": false, 00:10:24.678 "write_zeroes": true, 00:10:24.678 "zcopy": true, 00:10:24.678 "get_zone_info": false, 00:10:24.678 "zone_management": false, 00:10:24.678 "zone_append": false, 00:10:24.678 "compare": false, 00:10:24.678 "compare_and_write": false, 00:10:24.678 "abort": true, 00:10:24.678 "seek_hole": false, 00:10:24.678 "seek_data": false, 00:10:24.678 "copy": true, 00:10:24.678 "nvme_iov_md": false 00:10:24.678 }, 00:10:24.678 "memory_domains": [ 00:10:24.678 { 00:10:24.678 "dma_device_id": "system", 00:10:24.678 "dma_device_type": 1 00:10:24.678 }, 00:10:24.678 { 00:10:24.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.678 "dma_device_type": 2 00:10:24.678 } 00:10:24.678 ], 00:10:24.678 "driver_specific": {} 00:10:24.678 } 00:10:24.678 ] 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.678 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.679 "name": "Existed_Raid", 00:10:24.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.679 "strip_size_kb": 64, 00:10:24.679 "state": "configuring", 00:10:24.679 "raid_level": "concat", 00:10:24.679 "superblock": false, 00:10:24.679 "num_base_bdevs": 4, 00:10:24.679 "num_base_bdevs_discovered": 2, 00:10:24.679 "num_base_bdevs_operational": 4, 00:10:24.679 "base_bdevs_list": [ 00:10:24.679 { 00:10:24.679 "name": "BaseBdev1", 00:10:24.679 "uuid": "88615078-264c-4c32-9ac7-1f9e5c0f338e", 00:10:24.679 "is_configured": true, 00:10:24.679 "data_offset": 0, 00:10:24.679 "data_size": 65536 00:10:24.679 }, 00:10:24.679 { 00:10:24.679 "name": "BaseBdev2", 00:10:24.679 "uuid": "194759fc-dadb-48bf-b31d-6840ec29a393", 00:10:24.679 
"is_configured": true, 00:10:24.679 "data_offset": 0, 00:10:24.679 "data_size": 65536 00:10:24.679 }, 00:10:24.679 { 00:10:24.679 "name": "BaseBdev3", 00:10:24.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.679 "is_configured": false, 00:10:24.679 "data_offset": 0, 00:10:24.679 "data_size": 0 00:10:24.679 }, 00:10:24.679 { 00:10:24.679 "name": "BaseBdev4", 00:10:24.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.679 "is_configured": false, 00:10:24.679 "data_offset": 0, 00:10:24.679 "data_size": 0 00:10:24.679 } 00:10:24.679 ] 00:10:24.679 }' 00:10:24.679 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.679 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.939 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:24.939 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.939 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.939 [2024-11-18 03:10:28.499104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.939 BaseBdev3 00:10:24.939 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.939 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:24.939 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:24.939 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:24.939 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:24.939 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:24.939 03:10:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:24.939 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:24.939 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.939 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.939 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.939 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:24.939 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.939 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.199 [ 00:10:25.199 { 00:10:25.199 "name": "BaseBdev3", 00:10:25.199 "aliases": [ 00:10:25.199 "c2413940-39e8-423b-a427-cb19b0d746db" 00:10:25.199 ], 00:10:25.199 "product_name": "Malloc disk", 00:10:25.199 "block_size": 512, 00:10:25.199 "num_blocks": 65536, 00:10:25.199 "uuid": "c2413940-39e8-423b-a427-cb19b0d746db", 00:10:25.199 "assigned_rate_limits": { 00:10:25.199 "rw_ios_per_sec": 0, 00:10:25.199 "rw_mbytes_per_sec": 0, 00:10:25.199 "r_mbytes_per_sec": 0, 00:10:25.199 "w_mbytes_per_sec": 0 00:10:25.199 }, 00:10:25.199 "claimed": true, 00:10:25.199 "claim_type": "exclusive_write", 00:10:25.199 "zoned": false, 00:10:25.199 "supported_io_types": { 00:10:25.199 "read": true, 00:10:25.199 "write": true, 00:10:25.199 "unmap": true, 00:10:25.199 "flush": true, 00:10:25.199 "reset": true, 00:10:25.199 "nvme_admin": false, 00:10:25.199 "nvme_io": false, 00:10:25.199 "nvme_io_md": false, 00:10:25.199 "write_zeroes": true, 00:10:25.199 "zcopy": true, 00:10:25.199 "get_zone_info": false, 00:10:25.199 "zone_management": false, 00:10:25.199 "zone_append": false, 00:10:25.199 "compare": false, 00:10:25.199 "compare_and_write": false, 
00:10:25.199 "abort": true, 00:10:25.199 "seek_hole": false, 00:10:25.199 "seek_data": false, 00:10:25.199 "copy": true, 00:10:25.199 "nvme_iov_md": false 00:10:25.199 }, 00:10:25.199 "memory_domains": [ 00:10:25.199 { 00:10:25.199 "dma_device_id": "system", 00:10:25.199 "dma_device_type": 1 00:10:25.199 }, 00:10:25.199 { 00:10:25.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.199 "dma_device_type": 2 00:10:25.199 } 00:10:25.199 ], 00:10:25.199 "driver_specific": {} 00:10:25.199 } 00:10:25.199 ] 00:10:25.199 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.199 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:25.199 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:25.199 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:25.199 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:25.199 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.199 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.199 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.199 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.199 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.199 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.199 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.199 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:25.199 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.199 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.199 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.200 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.200 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.200 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.200 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.200 "name": "Existed_Raid", 00:10:25.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.200 "strip_size_kb": 64, 00:10:25.200 "state": "configuring", 00:10:25.200 "raid_level": "concat", 00:10:25.200 "superblock": false, 00:10:25.200 "num_base_bdevs": 4, 00:10:25.200 "num_base_bdevs_discovered": 3, 00:10:25.200 "num_base_bdevs_operational": 4, 00:10:25.200 "base_bdevs_list": [ 00:10:25.200 { 00:10:25.200 "name": "BaseBdev1", 00:10:25.200 "uuid": "88615078-264c-4c32-9ac7-1f9e5c0f338e", 00:10:25.200 "is_configured": true, 00:10:25.200 "data_offset": 0, 00:10:25.200 "data_size": 65536 00:10:25.200 }, 00:10:25.200 { 00:10:25.200 "name": "BaseBdev2", 00:10:25.200 "uuid": "194759fc-dadb-48bf-b31d-6840ec29a393", 00:10:25.200 "is_configured": true, 00:10:25.200 "data_offset": 0, 00:10:25.200 "data_size": 65536 00:10:25.200 }, 00:10:25.200 { 00:10:25.200 "name": "BaseBdev3", 00:10:25.200 "uuid": "c2413940-39e8-423b-a427-cb19b0d746db", 00:10:25.200 "is_configured": true, 00:10:25.200 "data_offset": 0, 00:10:25.200 "data_size": 65536 00:10:25.200 }, 00:10:25.200 { 00:10:25.200 "name": "BaseBdev4", 00:10:25.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.200 "is_configured": false, 
00:10:25.200 "data_offset": 0, 00:10:25.200 "data_size": 0 00:10:25.200 } 00:10:25.200 ] 00:10:25.200 }' 00:10:25.200 03:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.200 03:10:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.460 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:25.460 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.460 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.460 [2024-11-18 03:10:29.021409] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:25.460 [2024-11-18 03:10:29.021538] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:25.460 [2024-11-18 03:10:29.021574] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:25.460 [2024-11-18 03:10:29.021882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:25.460 [2024-11-18 03:10:29.022073] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:25.460 [2024-11-18 03:10:29.022126] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:25.460 [2024-11-18 03:10:29.022361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.460 BaseBdev4 00:10:25.460 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.460 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:25.460 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:25.460 03:10:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:25.460 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:25.460 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:25.460 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:25.460 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:25.460 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.460 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.460 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.460 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:25.460 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.720 [ 00:10:25.720 { 00:10:25.720 "name": "BaseBdev4", 00:10:25.720 "aliases": [ 00:10:25.720 "3ff23788-7459-458e-a4c7-ba8476ca739e" 00:10:25.720 ], 00:10:25.720 "product_name": "Malloc disk", 00:10:25.720 "block_size": 512, 00:10:25.720 "num_blocks": 65536, 00:10:25.720 "uuid": "3ff23788-7459-458e-a4c7-ba8476ca739e", 00:10:25.720 "assigned_rate_limits": { 00:10:25.720 "rw_ios_per_sec": 0, 00:10:25.720 "rw_mbytes_per_sec": 0, 00:10:25.720 "r_mbytes_per_sec": 0, 00:10:25.720 "w_mbytes_per_sec": 0 00:10:25.720 }, 00:10:25.720 "claimed": true, 00:10:25.720 "claim_type": "exclusive_write", 00:10:25.720 "zoned": false, 00:10:25.720 "supported_io_types": { 00:10:25.720 "read": true, 00:10:25.720 "write": true, 00:10:25.720 "unmap": true, 00:10:25.720 "flush": true, 00:10:25.720 "reset": true, 00:10:25.720 
"nvme_admin": false, 00:10:25.720 "nvme_io": false, 00:10:25.720 "nvme_io_md": false, 00:10:25.720 "write_zeroes": true, 00:10:25.720 "zcopy": true, 00:10:25.720 "get_zone_info": false, 00:10:25.720 "zone_management": false, 00:10:25.720 "zone_append": false, 00:10:25.720 "compare": false, 00:10:25.720 "compare_and_write": false, 00:10:25.720 "abort": true, 00:10:25.720 "seek_hole": false, 00:10:25.720 "seek_data": false, 00:10:25.720 "copy": true, 00:10:25.720 "nvme_iov_md": false 00:10:25.720 }, 00:10:25.720 "memory_domains": [ 00:10:25.720 { 00:10:25.720 "dma_device_id": "system", 00:10:25.720 "dma_device_type": 1 00:10:25.720 }, 00:10:25.720 { 00:10:25.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.720 "dma_device_type": 2 00:10:25.720 } 00:10:25.720 ], 00:10:25.720 "driver_specific": {} 00:10:25.720 } 00:10:25.720 ] 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.720 
03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.720 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.720 "name": "Existed_Raid", 00:10:25.720 "uuid": "754b4bc9-cf50-40aa-beb0-4caba369c38e", 00:10:25.720 "strip_size_kb": 64, 00:10:25.720 "state": "online", 00:10:25.720 "raid_level": "concat", 00:10:25.720 "superblock": false, 00:10:25.720 "num_base_bdevs": 4, 00:10:25.720 "num_base_bdevs_discovered": 4, 00:10:25.720 "num_base_bdevs_operational": 4, 00:10:25.720 "base_bdevs_list": [ 00:10:25.720 { 00:10:25.720 "name": "BaseBdev1", 00:10:25.720 "uuid": "88615078-264c-4c32-9ac7-1f9e5c0f338e", 00:10:25.720 "is_configured": true, 00:10:25.720 "data_offset": 0, 00:10:25.720 "data_size": 65536 00:10:25.720 }, 00:10:25.720 { 00:10:25.720 "name": "BaseBdev2", 00:10:25.720 "uuid": "194759fc-dadb-48bf-b31d-6840ec29a393", 00:10:25.720 "is_configured": true, 00:10:25.720 "data_offset": 0, 00:10:25.720 "data_size": 65536 00:10:25.720 }, 00:10:25.720 { 00:10:25.720 "name": "BaseBdev3", 
00:10:25.720 "uuid": "c2413940-39e8-423b-a427-cb19b0d746db", 00:10:25.720 "is_configured": true, 00:10:25.720 "data_offset": 0, 00:10:25.720 "data_size": 65536 00:10:25.720 }, 00:10:25.720 { 00:10:25.720 "name": "BaseBdev4", 00:10:25.720 "uuid": "3ff23788-7459-458e-a4c7-ba8476ca739e", 00:10:25.720 "is_configured": true, 00:10:25.721 "data_offset": 0, 00:10:25.721 "data_size": 65536 00:10:25.721 } 00:10:25.721 ] 00:10:25.721 }' 00:10:25.721 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.721 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.981 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:25.981 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:25.981 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:25.981 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:25.981 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:25.981 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:25.981 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:25.981 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:25.981 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.981 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.981 [2024-11-18 03:10:29.501036] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.981 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.981 
03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:25.981 "name": "Existed_Raid", 00:10:25.981 "aliases": [ 00:10:25.981 "754b4bc9-cf50-40aa-beb0-4caba369c38e" 00:10:25.981 ], 00:10:25.981 "product_name": "Raid Volume", 00:10:25.981 "block_size": 512, 00:10:25.981 "num_blocks": 262144, 00:10:25.981 "uuid": "754b4bc9-cf50-40aa-beb0-4caba369c38e", 00:10:25.981 "assigned_rate_limits": { 00:10:25.981 "rw_ios_per_sec": 0, 00:10:25.981 "rw_mbytes_per_sec": 0, 00:10:25.981 "r_mbytes_per_sec": 0, 00:10:25.981 "w_mbytes_per_sec": 0 00:10:25.981 }, 00:10:25.981 "claimed": false, 00:10:25.981 "zoned": false, 00:10:25.981 "supported_io_types": { 00:10:25.981 "read": true, 00:10:25.981 "write": true, 00:10:25.981 "unmap": true, 00:10:25.981 "flush": true, 00:10:25.981 "reset": true, 00:10:25.981 "nvme_admin": false, 00:10:25.981 "nvme_io": false, 00:10:25.981 "nvme_io_md": false, 00:10:25.981 "write_zeroes": true, 00:10:25.981 "zcopy": false, 00:10:25.981 "get_zone_info": false, 00:10:25.981 "zone_management": false, 00:10:25.981 "zone_append": false, 00:10:25.981 "compare": false, 00:10:25.981 "compare_and_write": false, 00:10:25.981 "abort": false, 00:10:25.981 "seek_hole": false, 00:10:25.981 "seek_data": false, 00:10:25.981 "copy": false, 00:10:25.981 "nvme_iov_md": false 00:10:25.981 }, 00:10:25.981 "memory_domains": [ 00:10:25.981 { 00:10:25.981 "dma_device_id": "system", 00:10:25.981 "dma_device_type": 1 00:10:25.981 }, 00:10:25.981 { 00:10:25.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.981 "dma_device_type": 2 00:10:25.981 }, 00:10:25.981 { 00:10:25.981 "dma_device_id": "system", 00:10:25.981 "dma_device_type": 1 00:10:25.981 }, 00:10:25.981 { 00:10:25.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.981 "dma_device_type": 2 00:10:25.981 }, 00:10:25.981 { 00:10:25.981 "dma_device_id": "system", 00:10:25.981 "dma_device_type": 1 00:10:25.981 }, 00:10:25.981 { 00:10:25.981 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:25.981 "dma_device_type": 2 00:10:25.981 }, 00:10:25.981 { 00:10:25.981 "dma_device_id": "system", 00:10:25.981 "dma_device_type": 1 00:10:25.981 }, 00:10:25.981 { 00:10:25.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.981 "dma_device_type": 2 00:10:25.981 } 00:10:25.981 ], 00:10:25.981 "driver_specific": { 00:10:25.981 "raid": { 00:10:25.981 "uuid": "754b4bc9-cf50-40aa-beb0-4caba369c38e", 00:10:25.981 "strip_size_kb": 64, 00:10:25.981 "state": "online", 00:10:25.981 "raid_level": "concat", 00:10:25.981 "superblock": false, 00:10:25.981 "num_base_bdevs": 4, 00:10:25.981 "num_base_bdevs_discovered": 4, 00:10:25.981 "num_base_bdevs_operational": 4, 00:10:25.981 "base_bdevs_list": [ 00:10:25.981 { 00:10:25.981 "name": "BaseBdev1", 00:10:25.981 "uuid": "88615078-264c-4c32-9ac7-1f9e5c0f338e", 00:10:25.981 "is_configured": true, 00:10:25.981 "data_offset": 0, 00:10:25.981 "data_size": 65536 00:10:25.981 }, 00:10:25.981 { 00:10:25.981 "name": "BaseBdev2", 00:10:25.981 "uuid": "194759fc-dadb-48bf-b31d-6840ec29a393", 00:10:25.981 "is_configured": true, 00:10:25.981 "data_offset": 0, 00:10:25.981 "data_size": 65536 00:10:25.981 }, 00:10:25.981 { 00:10:25.981 "name": "BaseBdev3", 00:10:25.981 "uuid": "c2413940-39e8-423b-a427-cb19b0d746db", 00:10:25.981 "is_configured": true, 00:10:25.981 "data_offset": 0, 00:10:25.981 "data_size": 65536 00:10:25.981 }, 00:10:25.981 { 00:10:25.981 "name": "BaseBdev4", 00:10:25.981 "uuid": "3ff23788-7459-458e-a4c7-ba8476ca739e", 00:10:25.981 "is_configured": true, 00:10:25.981 "data_offset": 0, 00:10:25.981 "data_size": 65536 00:10:25.981 } 00:10:25.981 ] 00:10:25.981 } 00:10:25.981 } 00:10:25.981 }' 00:10:25.981 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:26.242 BaseBdev2 
00:10:26.242 BaseBdev3 00:10:26.242 BaseBdev4' 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.242 03:10:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.242 03:10:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.242 [2024-11-18 03:10:29.772276] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:26.242 [2024-11-18 03:10:29.772371] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.242 [2024-11-18 03:10:29.772483] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.242 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.503 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.503 "name": "Existed_Raid", 00:10:26.503 "uuid": "754b4bc9-cf50-40aa-beb0-4caba369c38e", 00:10:26.503 "strip_size_kb": 64, 00:10:26.503 "state": "offline", 00:10:26.503 "raid_level": "concat", 00:10:26.503 "superblock": false, 00:10:26.503 "num_base_bdevs": 4, 00:10:26.503 "num_base_bdevs_discovered": 3, 00:10:26.503 "num_base_bdevs_operational": 3, 00:10:26.503 "base_bdevs_list": [ 00:10:26.503 { 00:10:26.503 "name": null, 00:10:26.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.503 "is_configured": false, 00:10:26.503 "data_offset": 0, 00:10:26.503 "data_size": 65536 00:10:26.503 }, 00:10:26.503 { 00:10:26.503 "name": "BaseBdev2", 00:10:26.503 "uuid": "194759fc-dadb-48bf-b31d-6840ec29a393", 00:10:26.503 "is_configured": 
true, 00:10:26.503 "data_offset": 0, 00:10:26.503 "data_size": 65536 00:10:26.503 }, 00:10:26.503 { 00:10:26.503 "name": "BaseBdev3", 00:10:26.503 "uuid": "c2413940-39e8-423b-a427-cb19b0d746db", 00:10:26.503 "is_configured": true, 00:10:26.503 "data_offset": 0, 00:10:26.503 "data_size": 65536 00:10:26.503 }, 00:10:26.503 { 00:10:26.503 "name": "BaseBdev4", 00:10:26.503 "uuid": "3ff23788-7459-458e-a4c7-ba8476ca739e", 00:10:26.503 "is_configured": true, 00:10:26.503 "data_offset": 0, 00:10:26.503 "data_size": 65536 00:10:26.503 } 00:10:26.503 ] 00:10:26.503 }' 00:10:26.503 03:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.503 03:10:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.763 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:26.763 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:26.763 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.763 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:26.763 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.763 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.763 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.763 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:26.763 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:26.763 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:26.763 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:26.763 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.763 [2024-11-18 03:10:30.267394] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:26.763 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.763 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:26.764 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:26.764 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.764 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:26.764 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.764 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.764 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.764 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:26.764 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:26.764 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:26.764 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.764 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.764 [2024-11-18 03:10:30.338662] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:27.024 03:10:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.024 [2024-11-18 03:10:30.402016] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:27.024 [2024-11-18 03:10:30.402105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.024 BaseBdev2 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:27.024 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.025 [ 00:10:27.025 { 00:10:27.025 "name": "BaseBdev2", 00:10:27.025 "aliases": [ 00:10:27.025 "dfbe1c6f-94c2-4607-b9af-8a9176fdc14d" 00:10:27.025 ], 00:10:27.025 "product_name": "Malloc disk", 00:10:27.025 "block_size": 512, 00:10:27.025 "num_blocks": 65536, 00:10:27.025 "uuid": "dfbe1c6f-94c2-4607-b9af-8a9176fdc14d", 00:10:27.025 "assigned_rate_limits": { 00:10:27.025 "rw_ios_per_sec": 0, 00:10:27.025 "rw_mbytes_per_sec": 0, 00:10:27.025 "r_mbytes_per_sec": 0, 00:10:27.025 "w_mbytes_per_sec": 0 00:10:27.025 }, 00:10:27.025 "claimed": false, 00:10:27.025 "zoned": false, 00:10:27.025 "supported_io_types": { 00:10:27.025 "read": true, 00:10:27.025 "write": true, 00:10:27.025 "unmap": true, 00:10:27.025 "flush": true, 00:10:27.025 "reset": true, 00:10:27.025 "nvme_admin": false, 00:10:27.025 "nvme_io": false, 00:10:27.025 "nvme_io_md": false, 00:10:27.025 "write_zeroes": true, 00:10:27.025 "zcopy": true, 00:10:27.025 "get_zone_info": false, 00:10:27.025 "zone_management": false, 00:10:27.025 "zone_append": false, 00:10:27.025 "compare": false, 00:10:27.025 "compare_and_write": false, 00:10:27.025 "abort": true, 00:10:27.025 "seek_hole": false, 00:10:27.025 
"seek_data": false, 00:10:27.025 "copy": true, 00:10:27.025 "nvme_iov_md": false 00:10:27.025 }, 00:10:27.025 "memory_domains": [ 00:10:27.025 { 00:10:27.025 "dma_device_id": "system", 00:10:27.025 "dma_device_type": 1 00:10:27.025 }, 00:10:27.025 { 00:10:27.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.025 "dma_device_type": 2 00:10:27.025 } 00:10:27.025 ], 00:10:27.025 "driver_specific": {} 00:10:27.025 } 00:10:27.025 ] 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.025 BaseBdev3 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.025 [ 00:10:27.025 { 00:10:27.025 "name": "BaseBdev3", 00:10:27.025 "aliases": [ 00:10:27.025 "2bfaedaf-59dd-4947-b046-2ef789c98cdb" 00:10:27.025 ], 00:10:27.025 "product_name": "Malloc disk", 00:10:27.025 "block_size": 512, 00:10:27.025 "num_blocks": 65536, 00:10:27.025 "uuid": "2bfaedaf-59dd-4947-b046-2ef789c98cdb", 00:10:27.025 "assigned_rate_limits": { 00:10:27.025 "rw_ios_per_sec": 0, 00:10:27.025 "rw_mbytes_per_sec": 0, 00:10:27.025 "r_mbytes_per_sec": 0, 00:10:27.025 "w_mbytes_per_sec": 0 00:10:27.025 }, 00:10:27.025 "claimed": false, 00:10:27.025 "zoned": false, 00:10:27.025 "supported_io_types": { 00:10:27.025 "read": true, 00:10:27.025 "write": true, 00:10:27.025 "unmap": true, 00:10:27.025 "flush": true, 00:10:27.025 "reset": true, 00:10:27.025 "nvme_admin": false, 00:10:27.025 "nvme_io": false, 00:10:27.025 "nvme_io_md": false, 00:10:27.025 "write_zeroes": true, 00:10:27.025 "zcopy": true, 00:10:27.025 "get_zone_info": false, 00:10:27.025 "zone_management": false, 00:10:27.025 "zone_append": false, 00:10:27.025 "compare": false, 00:10:27.025 "compare_and_write": false, 00:10:27.025 "abort": true, 00:10:27.025 "seek_hole": false, 00:10:27.025 "seek_data": false, 
00:10:27.025 "copy": true, 00:10:27.025 "nvme_iov_md": false 00:10:27.025 }, 00:10:27.025 "memory_domains": [ 00:10:27.025 { 00:10:27.025 "dma_device_id": "system", 00:10:27.025 "dma_device_type": 1 00:10:27.025 }, 00:10:27.025 { 00:10:27.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.025 "dma_device_type": 2 00:10:27.025 } 00:10:27.025 ], 00:10:27.025 "driver_specific": {} 00:10:27.025 } 00:10:27.025 ] 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.025 BaseBdev4 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:27.025 
03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.025 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.286 [ 00:10:27.286 { 00:10:27.286 "name": "BaseBdev4", 00:10:27.286 "aliases": [ 00:10:27.286 "05c546d8-79bd-44d0-89dc-926a12534c6e" 00:10:27.286 ], 00:10:27.286 "product_name": "Malloc disk", 00:10:27.286 "block_size": 512, 00:10:27.286 "num_blocks": 65536, 00:10:27.286 "uuid": "05c546d8-79bd-44d0-89dc-926a12534c6e", 00:10:27.286 "assigned_rate_limits": { 00:10:27.286 "rw_ios_per_sec": 0, 00:10:27.286 "rw_mbytes_per_sec": 0, 00:10:27.286 "r_mbytes_per_sec": 0, 00:10:27.286 "w_mbytes_per_sec": 0 00:10:27.286 }, 00:10:27.286 "claimed": false, 00:10:27.286 "zoned": false, 00:10:27.286 "supported_io_types": { 00:10:27.286 "read": true, 00:10:27.286 "write": true, 00:10:27.286 "unmap": true, 00:10:27.286 "flush": true, 00:10:27.286 "reset": true, 00:10:27.286 "nvme_admin": false, 00:10:27.286 "nvme_io": false, 00:10:27.286 "nvme_io_md": false, 00:10:27.286 "write_zeroes": true, 00:10:27.286 "zcopy": true, 00:10:27.286 "get_zone_info": false, 00:10:27.286 "zone_management": false, 00:10:27.286 "zone_append": false, 00:10:27.286 "compare": false, 00:10:27.286 "compare_and_write": false, 00:10:27.286 "abort": true, 00:10:27.286 "seek_hole": false, 00:10:27.286 "seek_data": false, 00:10:27.286 
"copy": true, 00:10:27.286 "nvme_iov_md": false 00:10:27.286 }, 00:10:27.286 "memory_domains": [ 00:10:27.286 { 00:10:27.286 "dma_device_id": "system", 00:10:27.286 "dma_device_type": 1 00:10:27.286 }, 00:10:27.286 { 00:10:27.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.286 "dma_device_type": 2 00:10:27.286 } 00:10:27.286 ], 00:10:27.286 "driver_specific": {} 00:10:27.286 } 00:10:27.286 ] 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.286 [2024-11-18 03:10:30.631426] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:27.286 [2024-11-18 03:10:30.631518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:27.286 [2024-11-18 03:10:30.631578] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.286 [2024-11-18 03:10:30.633501] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.286 [2024-11-18 03:10:30.633591] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.286 03:10:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.286 "name": "Existed_Raid", 00:10:27.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.286 "strip_size_kb": 64, 00:10:27.286 "state": "configuring", 00:10:27.286 
"raid_level": "concat", 00:10:27.286 "superblock": false, 00:10:27.286 "num_base_bdevs": 4, 00:10:27.286 "num_base_bdevs_discovered": 3, 00:10:27.286 "num_base_bdevs_operational": 4, 00:10:27.286 "base_bdevs_list": [ 00:10:27.286 { 00:10:27.286 "name": "BaseBdev1", 00:10:27.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.286 "is_configured": false, 00:10:27.286 "data_offset": 0, 00:10:27.286 "data_size": 0 00:10:27.286 }, 00:10:27.286 { 00:10:27.286 "name": "BaseBdev2", 00:10:27.286 "uuid": "dfbe1c6f-94c2-4607-b9af-8a9176fdc14d", 00:10:27.286 "is_configured": true, 00:10:27.286 "data_offset": 0, 00:10:27.286 "data_size": 65536 00:10:27.286 }, 00:10:27.286 { 00:10:27.286 "name": "BaseBdev3", 00:10:27.286 "uuid": "2bfaedaf-59dd-4947-b046-2ef789c98cdb", 00:10:27.286 "is_configured": true, 00:10:27.286 "data_offset": 0, 00:10:27.286 "data_size": 65536 00:10:27.286 }, 00:10:27.286 { 00:10:27.286 "name": "BaseBdev4", 00:10:27.286 "uuid": "05c546d8-79bd-44d0-89dc-926a12534c6e", 00:10:27.286 "is_configured": true, 00:10:27.286 "data_offset": 0, 00:10:27.286 "data_size": 65536 00:10:27.286 } 00:10:27.286 ] 00:10:27.286 }' 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.286 03:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.547 [2024-11-18 03:10:31.086681] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.547 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.807 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.807 "name": "Existed_Raid", 00:10:27.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.807 "strip_size_kb": 64, 00:10:27.807 "state": "configuring", 00:10:27.807 "raid_level": "concat", 00:10:27.807 "superblock": false, 
00:10:27.807 "num_base_bdevs": 4, 00:10:27.807 "num_base_bdevs_discovered": 2, 00:10:27.807 "num_base_bdevs_operational": 4, 00:10:27.807 "base_bdevs_list": [ 00:10:27.807 { 00:10:27.807 "name": "BaseBdev1", 00:10:27.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.807 "is_configured": false, 00:10:27.807 "data_offset": 0, 00:10:27.807 "data_size": 0 00:10:27.807 }, 00:10:27.807 { 00:10:27.807 "name": null, 00:10:27.807 "uuid": "dfbe1c6f-94c2-4607-b9af-8a9176fdc14d", 00:10:27.807 "is_configured": false, 00:10:27.807 "data_offset": 0, 00:10:27.807 "data_size": 65536 00:10:27.807 }, 00:10:27.807 { 00:10:27.807 "name": "BaseBdev3", 00:10:27.807 "uuid": "2bfaedaf-59dd-4947-b046-2ef789c98cdb", 00:10:27.807 "is_configured": true, 00:10:27.807 "data_offset": 0, 00:10:27.807 "data_size": 65536 00:10:27.807 }, 00:10:27.807 { 00:10:27.807 "name": "BaseBdev4", 00:10:27.808 "uuid": "05c546d8-79bd-44d0-89dc-926a12534c6e", 00:10:27.808 "is_configured": true, 00:10:27.808 "data_offset": 0, 00:10:27.808 "data_size": 65536 00:10:27.808 } 00:10:27.808 ] 00:10:27.808 }' 00:10:27.808 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.808 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:28.068 03:10:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.068 [2024-11-18 03:10:31.601024] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.068 BaseBdev1 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:28.068 [ 00:10:28.068 { 00:10:28.068 "name": "BaseBdev1", 00:10:28.068 "aliases": [ 00:10:28.068 "4c1806ad-6616-47af-bd9b-f5d1ef75bf0c" 00:10:28.068 ], 00:10:28.068 "product_name": "Malloc disk", 00:10:28.068 "block_size": 512, 00:10:28.068 "num_blocks": 65536, 00:10:28.068 "uuid": "4c1806ad-6616-47af-bd9b-f5d1ef75bf0c", 00:10:28.068 "assigned_rate_limits": { 00:10:28.068 "rw_ios_per_sec": 0, 00:10:28.068 "rw_mbytes_per_sec": 0, 00:10:28.068 "r_mbytes_per_sec": 0, 00:10:28.068 "w_mbytes_per_sec": 0 00:10:28.068 }, 00:10:28.068 "claimed": true, 00:10:28.068 "claim_type": "exclusive_write", 00:10:28.068 "zoned": false, 00:10:28.068 "supported_io_types": { 00:10:28.068 "read": true, 00:10:28.068 "write": true, 00:10:28.068 "unmap": true, 00:10:28.068 "flush": true, 00:10:28.068 "reset": true, 00:10:28.068 "nvme_admin": false, 00:10:28.068 "nvme_io": false, 00:10:28.068 "nvme_io_md": false, 00:10:28.068 "write_zeroes": true, 00:10:28.068 "zcopy": true, 00:10:28.068 "get_zone_info": false, 00:10:28.068 "zone_management": false, 00:10:28.068 "zone_append": false, 00:10:28.068 "compare": false, 00:10:28.068 "compare_and_write": false, 00:10:28.068 "abort": true, 00:10:28.068 "seek_hole": false, 00:10:28.068 "seek_data": false, 00:10:28.068 "copy": true, 00:10:28.068 "nvme_iov_md": false 00:10:28.068 }, 00:10:28.068 "memory_domains": [ 00:10:28.068 { 00:10:28.068 "dma_device_id": "system", 00:10:28.068 "dma_device_type": 1 00:10:28.068 }, 00:10:28.068 { 00:10:28.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.068 "dma_device_type": 2 00:10:28.068 } 00:10:28.068 ], 00:10:28.068 "driver_specific": {} 00:10:28.068 } 00:10:28.068 ] 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.068 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.328 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.328 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.328 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.328 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.328 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.328 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.328 "name": "Existed_Raid", 00:10:28.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.328 "strip_size_kb": 64, 00:10:28.328 "state": "configuring", 00:10:28.328 "raid_level": "concat", 00:10:28.328 "superblock": false, 
00:10:28.328 "num_base_bdevs": 4, 00:10:28.328 "num_base_bdevs_discovered": 3, 00:10:28.328 "num_base_bdevs_operational": 4, 00:10:28.328 "base_bdevs_list": [ 00:10:28.328 { 00:10:28.328 "name": "BaseBdev1", 00:10:28.328 "uuid": "4c1806ad-6616-47af-bd9b-f5d1ef75bf0c", 00:10:28.328 "is_configured": true, 00:10:28.328 "data_offset": 0, 00:10:28.328 "data_size": 65536 00:10:28.328 }, 00:10:28.328 { 00:10:28.328 "name": null, 00:10:28.328 "uuid": "dfbe1c6f-94c2-4607-b9af-8a9176fdc14d", 00:10:28.328 "is_configured": false, 00:10:28.328 "data_offset": 0, 00:10:28.328 "data_size": 65536 00:10:28.328 }, 00:10:28.328 { 00:10:28.328 "name": "BaseBdev3", 00:10:28.328 "uuid": "2bfaedaf-59dd-4947-b046-2ef789c98cdb", 00:10:28.328 "is_configured": true, 00:10:28.328 "data_offset": 0, 00:10:28.328 "data_size": 65536 00:10:28.328 }, 00:10:28.328 { 00:10:28.328 "name": "BaseBdev4", 00:10:28.328 "uuid": "05c546d8-79bd-44d0-89dc-926a12534c6e", 00:10:28.328 "is_configured": true, 00:10:28.328 "data_offset": 0, 00:10:28.328 "data_size": 65536 00:10:28.328 } 00:10:28.328 ] 00:10:28.328 }' 00:10:28.328 03:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.328 03:10:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:28.588 03:10:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.588 [2024-11-18 03:10:32.100220] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.588 03:10:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.588 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.588 "name": "Existed_Raid", 00:10:28.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.588 "strip_size_kb": 64, 00:10:28.588 "state": "configuring", 00:10:28.588 "raid_level": "concat", 00:10:28.588 "superblock": false, 00:10:28.588 "num_base_bdevs": 4, 00:10:28.588 "num_base_bdevs_discovered": 2, 00:10:28.588 "num_base_bdevs_operational": 4, 00:10:28.588 "base_bdevs_list": [ 00:10:28.588 { 00:10:28.588 "name": "BaseBdev1", 00:10:28.588 "uuid": "4c1806ad-6616-47af-bd9b-f5d1ef75bf0c", 00:10:28.588 "is_configured": true, 00:10:28.588 "data_offset": 0, 00:10:28.588 "data_size": 65536 00:10:28.588 }, 00:10:28.588 { 00:10:28.588 "name": null, 00:10:28.588 "uuid": "dfbe1c6f-94c2-4607-b9af-8a9176fdc14d", 00:10:28.588 "is_configured": false, 00:10:28.588 "data_offset": 0, 00:10:28.588 "data_size": 65536 00:10:28.588 }, 00:10:28.588 { 00:10:28.589 "name": null, 00:10:28.589 "uuid": "2bfaedaf-59dd-4947-b046-2ef789c98cdb", 00:10:28.589 "is_configured": false, 00:10:28.589 "data_offset": 0, 00:10:28.589 "data_size": 65536 00:10:28.589 }, 00:10:28.589 { 00:10:28.589 "name": "BaseBdev4", 00:10:28.589 "uuid": "05c546d8-79bd-44d0-89dc-926a12534c6e", 00:10:28.589 "is_configured": true, 00:10:28.589 "data_offset": 0, 00:10:28.589 "data_size": 65536 00:10:28.589 } 00:10:28.589 ] 00:10:28.589 }' 00:10:28.589 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.589 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.158 03:10:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.159 [2024-11-18 03:10:32.595441] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.159 "name": "Existed_Raid", 00:10:29.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.159 "strip_size_kb": 64, 00:10:29.159 "state": "configuring", 00:10:29.159 "raid_level": "concat", 00:10:29.159 "superblock": false, 00:10:29.159 "num_base_bdevs": 4, 00:10:29.159 "num_base_bdevs_discovered": 3, 00:10:29.159 "num_base_bdevs_operational": 4, 00:10:29.159 "base_bdevs_list": [ 00:10:29.159 { 00:10:29.159 "name": "BaseBdev1", 00:10:29.159 "uuid": "4c1806ad-6616-47af-bd9b-f5d1ef75bf0c", 00:10:29.159 "is_configured": true, 00:10:29.159 "data_offset": 0, 00:10:29.159 "data_size": 65536 00:10:29.159 }, 00:10:29.159 { 00:10:29.159 "name": null, 00:10:29.159 "uuid": "dfbe1c6f-94c2-4607-b9af-8a9176fdc14d", 00:10:29.159 "is_configured": false, 00:10:29.159 "data_offset": 0, 00:10:29.159 "data_size": 65536 00:10:29.159 }, 00:10:29.159 { 00:10:29.159 "name": "BaseBdev3", 00:10:29.159 "uuid": 
"2bfaedaf-59dd-4947-b046-2ef789c98cdb", 00:10:29.159 "is_configured": true, 00:10:29.159 "data_offset": 0, 00:10:29.159 "data_size": 65536 00:10:29.159 }, 00:10:29.159 { 00:10:29.159 "name": "BaseBdev4", 00:10:29.159 "uuid": "05c546d8-79bd-44d0-89dc-926a12534c6e", 00:10:29.159 "is_configured": true, 00:10:29.159 "data_offset": 0, 00:10:29.159 "data_size": 65536 00:10:29.159 } 00:10:29.159 ] 00:10:29.159 }' 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.159 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.728 [2024-11-18 03:10:33.082628] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.728 "name": "Existed_Raid", 00:10:29.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.728 "strip_size_kb": 64, 00:10:29.728 "state": "configuring", 00:10:29.728 "raid_level": "concat", 00:10:29.728 "superblock": false, 00:10:29.728 "num_base_bdevs": 4, 00:10:29.728 
"num_base_bdevs_discovered": 2, 00:10:29.728 "num_base_bdevs_operational": 4, 00:10:29.728 "base_bdevs_list": [ 00:10:29.728 { 00:10:29.728 "name": null, 00:10:29.728 "uuid": "4c1806ad-6616-47af-bd9b-f5d1ef75bf0c", 00:10:29.728 "is_configured": false, 00:10:29.728 "data_offset": 0, 00:10:29.728 "data_size": 65536 00:10:29.728 }, 00:10:29.728 { 00:10:29.728 "name": null, 00:10:29.728 "uuid": "dfbe1c6f-94c2-4607-b9af-8a9176fdc14d", 00:10:29.728 "is_configured": false, 00:10:29.728 "data_offset": 0, 00:10:29.728 "data_size": 65536 00:10:29.728 }, 00:10:29.728 { 00:10:29.728 "name": "BaseBdev3", 00:10:29.728 "uuid": "2bfaedaf-59dd-4947-b046-2ef789c98cdb", 00:10:29.728 "is_configured": true, 00:10:29.728 "data_offset": 0, 00:10:29.728 "data_size": 65536 00:10:29.728 }, 00:10:29.728 { 00:10:29.728 "name": "BaseBdev4", 00:10:29.728 "uuid": "05c546d8-79bd-44d0-89dc-926a12534c6e", 00:10:29.728 "is_configured": true, 00:10:29.728 "data_offset": 0, 00:10:29.728 "data_size": 65536 00:10:29.728 } 00:10:29.728 ] 00:10:29.728 }' 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.728 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.988 [2024-11-18 03:10:33.540569] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.988 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.248 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.248 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.248 "name": "Existed_Raid", 00:10:30.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.248 "strip_size_kb": 64, 00:10:30.248 "state": "configuring", 00:10:30.248 "raid_level": "concat", 00:10:30.248 "superblock": false, 00:10:30.248 "num_base_bdevs": 4, 00:10:30.248 "num_base_bdevs_discovered": 3, 00:10:30.248 "num_base_bdevs_operational": 4, 00:10:30.248 "base_bdevs_list": [ 00:10:30.248 { 00:10:30.248 "name": null, 00:10:30.248 "uuid": "4c1806ad-6616-47af-bd9b-f5d1ef75bf0c", 00:10:30.248 "is_configured": false, 00:10:30.248 "data_offset": 0, 00:10:30.248 "data_size": 65536 00:10:30.248 }, 00:10:30.248 { 00:10:30.248 "name": "BaseBdev2", 00:10:30.248 "uuid": "dfbe1c6f-94c2-4607-b9af-8a9176fdc14d", 00:10:30.248 "is_configured": true, 00:10:30.248 "data_offset": 0, 00:10:30.248 "data_size": 65536 00:10:30.248 }, 00:10:30.248 { 00:10:30.248 "name": "BaseBdev3", 00:10:30.248 "uuid": "2bfaedaf-59dd-4947-b046-2ef789c98cdb", 00:10:30.248 "is_configured": true, 00:10:30.248 "data_offset": 0, 00:10:30.248 "data_size": 65536 00:10:30.248 }, 00:10:30.248 { 00:10:30.248 "name": "BaseBdev4", 00:10:30.248 "uuid": "05c546d8-79bd-44d0-89dc-926a12534c6e", 00:10:30.248 "is_configured": true, 00:10:30.248 "data_offset": 0, 00:10:30.248 "data_size": 65536 00:10:30.248 } 00:10:30.248 ] 00:10:30.248 }' 00:10:30.248 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.248 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.508 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:30.508 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.508 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.508 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:30.508 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.508 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:30.508 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.508 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:30.508 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.508 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.508 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4c1806ad-6616-47af-bd9b-f5d1ef75bf0c 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.508 [2024-11-18 03:10:34.034839] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:30.508 [2024-11-18 03:10:34.034982] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:30.508 [2024-11-18 03:10:34.035010] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:30.508 [2024-11-18 03:10:34.035331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:10:30.508 NewBaseBdev 00:10:30.508 [2024-11-18 03:10:34.035503] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:30.508 [2024-11-18 03:10:34.035522] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:30.508 [2024-11-18 03:10:34.035718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:30.508 [ 00:10:30.508 { 00:10:30.508 "name": "NewBaseBdev", 00:10:30.508 "aliases": [ 00:10:30.508 "4c1806ad-6616-47af-bd9b-f5d1ef75bf0c" 00:10:30.508 ], 00:10:30.508 "product_name": "Malloc disk", 00:10:30.508 "block_size": 512, 00:10:30.508 "num_blocks": 65536, 00:10:30.508 "uuid": "4c1806ad-6616-47af-bd9b-f5d1ef75bf0c", 00:10:30.508 "assigned_rate_limits": { 00:10:30.508 "rw_ios_per_sec": 0, 00:10:30.508 "rw_mbytes_per_sec": 0, 00:10:30.508 "r_mbytes_per_sec": 0, 00:10:30.508 "w_mbytes_per_sec": 0 00:10:30.508 }, 00:10:30.508 "claimed": true, 00:10:30.508 "claim_type": "exclusive_write", 00:10:30.508 "zoned": false, 00:10:30.508 "supported_io_types": { 00:10:30.508 "read": true, 00:10:30.508 "write": true, 00:10:30.508 "unmap": true, 00:10:30.508 "flush": true, 00:10:30.508 "reset": true, 00:10:30.508 "nvme_admin": false, 00:10:30.508 "nvme_io": false, 00:10:30.508 "nvme_io_md": false, 00:10:30.508 "write_zeroes": true, 00:10:30.508 "zcopy": true, 00:10:30.508 "get_zone_info": false, 00:10:30.508 "zone_management": false, 00:10:30.508 "zone_append": false, 00:10:30.508 "compare": false, 00:10:30.508 "compare_and_write": false, 00:10:30.508 "abort": true, 00:10:30.508 "seek_hole": false, 00:10:30.508 "seek_data": false, 00:10:30.508 "copy": true, 00:10:30.508 "nvme_iov_md": false 00:10:30.508 }, 00:10:30.508 "memory_domains": [ 00:10:30.508 { 00:10:30.508 "dma_device_id": "system", 00:10:30.508 "dma_device_type": 1 00:10:30.508 }, 00:10:30.508 { 00:10:30.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.508 "dma_device_type": 2 00:10:30.508 } 00:10:30.508 ], 00:10:30.508 "driver_specific": {} 00:10:30.508 } 00:10:30.508 ] 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.508 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.768 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.768 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.768 "name": "Existed_Raid", 00:10:30.768 "uuid": "36fffe19-652c-4e51-818f-0c801917c2e1", 00:10:30.768 "strip_size_kb": 64, 00:10:30.768 "state": "online", 00:10:30.768 "raid_level": "concat", 00:10:30.768 "superblock": false, 00:10:30.768 
"num_base_bdevs": 4, 00:10:30.768 "num_base_bdevs_discovered": 4, 00:10:30.768 "num_base_bdevs_operational": 4, 00:10:30.768 "base_bdevs_list": [ 00:10:30.768 { 00:10:30.768 "name": "NewBaseBdev", 00:10:30.768 "uuid": "4c1806ad-6616-47af-bd9b-f5d1ef75bf0c", 00:10:30.768 "is_configured": true, 00:10:30.768 "data_offset": 0, 00:10:30.768 "data_size": 65536 00:10:30.768 }, 00:10:30.768 { 00:10:30.768 "name": "BaseBdev2", 00:10:30.768 "uuid": "dfbe1c6f-94c2-4607-b9af-8a9176fdc14d", 00:10:30.768 "is_configured": true, 00:10:30.768 "data_offset": 0, 00:10:30.768 "data_size": 65536 00:10:30.768 }, 00:10:30.768 { 00:10:30.768 "name": "BaseBdev3", 00:10:30.768 "uuid": "2bfaedaf-59dd-4947-b046-2ef789c98cdb", 00:10:30.768 "is_configured": true, 00:10:30.768 "data_offset": 0, 00:10:30.768 "data_size": 65536 00:10:30.768 }, 00:10:30.768 { 00:10:30.768 "name": "BaseBdev4", 00:10:30.768 "uuid": "05c546d8-79bd-44d0-89dc-926a12534c6e", 00:10:30.768 "is_configured": true, 00:10:30.768 "data_offset": 0, 00:10:30.768 "data_size": 65536 00:10:30.768 } 00:10:30.768 ] 00:10:30.768 }' 00:10:30.768 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.768 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.029 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:31.029 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:31.029 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:31.029 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:31.029 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:31.029 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:31.029 03:10:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:31.029 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:31.029 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.029 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.029 [2024-11-18 03:10:34.522476] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:31.029 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.029 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:31.029 "name": "Existed_Raid", 00:10:31.029 "aliases": [ 00:10:31.029 "36fffe19-652c-4e51-818f-0c801917c2e1" 00:10:31.029 ], 00:10:31.029 "product_name": "Raid Volume", 00:10:31.029 "block_size": 512, 00:10:31.029 "num_blocks": 262144, 00:10:31.029 "uuid": "36fffe19-652c-4e51-818f-0c801917c2e1", 00:10:31.029 "assigned_rate_limits": { 00:10:31.029 "rw_ios_per_sec": 0, 00:10:31.029 "rw_mbytes_per_sec": 0, 00:10:31.029 "r_mbytes_per_sec": 0, 00:10:31.029 "w_mbytes_per_sec": 0 00:10:31.029 }, 00:10:31.029 "claimed": false, 00:10:31.029 "zoned": false, 00:10:31.029 "supported_io_types": { 00:10:31.029 "read": true, 00:10:31.029 "write": true, 00:10:31.029 "unmap": true, 00:10:31.029 "flush": true, 00:10:31.029 "reset": true, 00:10:31.029 "nvme_admin": false, 00:10:31.029 "nvme_io": false, 00:10:31.029 "nvme_io_md": false, 00:10:31.029 "write_zeroes": true, 00:10:31.029 "zcopy": false, 00:10:31.029 "get_zone_info": false, 00:10:31.029 "zone_management": false, 00:10:31.029 "zone_append": false, 00:10:31.029 "compare": false, 00:10:31.029 "compare_and_write": false, 00:10:31.029 "abort": false, 00:10:31.029 "seek_hole": false, 00:10:31.029 "seek_data": false, 00:10:31.029 "copy": false, 00:10:31.029 "nvme_iov_md": false 00:10:31.029 }, 
00:10:31.029 "memory_domains": [ 00:10:31.029 { 00:10:31.029 "dma_device_id": "system", 00:10:31.029 "dma_device_type": 1 00:10:31.029 }, 00:10:31.029 { 00:10:31.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.029 "dma_device_type": 2 00:10:31.029 }, 00:10:31.029 { 00:10:31.029 "dma_device_id": "system", 00:10:31.029 "dma_device_type": 1 00:10:31.029 }, 00:10:31.029 { 00:10:31.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.029 "dma_device_type": 2 00:10:31.029 }, 00:10:31.029 { 00:10:31.029 "dma_device_id": "system", 00:10:31.029 "dma_device_type": 1 00:10:31.029 }, 00:10:31.029 { 00:10:31.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.029 "dma_device_type": 2 00:10:31.029 }, 00:10:31.029 { 00:10:31.029 "dma_device_id": "system", 00:10:31.029 "dma_device_type": 1 00:10:31.029 }, 00:10:31.029 { 00:10:31.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.029 "dma_device_type": 2 00:10:31.029 } 00:10:31.029 ], 00:10:31.029 "driver_specific": { 00:10:31.029 "raid": { 00:10:31.029 "uuid": "36fffe19-652c-4e51-818f-0c801917c2e1", 00:10:31.029 "strip_size_kb": 64, 00:10:31.029 "state": "online", 00:10:31.029 "raid_level": "concat", 00:10:31.029 "superblock": false, 00:10:31.029 "num_base_bdevs": 4, 00:10:31.029 "num_base_bdevs_discovered": 4, 00:10:31.029 "num_base_bdevs_operational": 4, 00:10:31.029 "base_bdevs_list": [ 00:10:31.029 { 00:10:31.029 "name": "NewBaseBdev", 00:10:31.029 "uuid": "4c1806ad-6616-47af-bd9b-f5d1ef75bf0c", 00:10:31.029 "is_configured": true, 00:10:31.029 "data_offset": 0, 00:10:31.029 "data_size": 65536 00:10:31.029 }, 00:10:31.029 { 00:10:31.029 "name": "BaseBdev2", 00:10:31.029 "uuid": "dfbe1c6f-94c2-4607-b9af-8a9176fdc14d", 00:10:31.029 "is_configured": true, 00:10:31.029 "data_offset": 0, 00:10:31.029 "data_size": 65536 00:10:31.029 }, 00:10:31.029 { 00:10:31.029 "name": "BaseBdev3", 00:10:31.029 "uuid": "2bfaedaf-59dd-4947-b046-2ef789c98cdb", 00:10:31.029 "is_configured": true, 00:10:31.029 "data_offset": 0, 
00:10:31.029 "data_size": 65536 00:10:31.029 }, 00:10:31.029 { 00:10:31.029 "name": "BaseBdev4", 00:10:31.029 "uuid": "05c546d8-79bd-44d0-89dc-926a12534c6e", 00:10:31.029 "is_configured": true, 00:10:31.029 "data_offset": 0, 00:10:31.029 "data_size": 65536 00:10:31.029 } 00:10:31.029 ] 00:10:31.029 } 00:10:31.029 } 00:10:31.029 }' 00:10:31.029 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:31.029 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:31.029 BaseBdev2 00:10:31.029 BaseBdev3 00:10:31.029 BaseBdev4' 00:10:31.029 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.289 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:31.289 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.289 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:31.289 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.289 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.289 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.289 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.289 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.289 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.289 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:10:31.289 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:31.289 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.290 [2024-11-18 03:10:34.837515] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.290 [2024-11-18 03:10:34.837598] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:31.290 [2024-11-18 03:10:34.837730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:31.290 [2024-11-18 03:10:34.837824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:31.290 [2024-11-18 03:10:34.837874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82312 00:10:31.290 03:10:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82312 ']' 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82312 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:31.290 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82312 00:10:31.550 killing process with pid 82312 00:10:31.550 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:31.550 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:31.550 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82312' 00:10:31.550 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 82312 00:10:31.550 [2024-11-18 03:10:34.880433] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:31.550 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 82312 00:10:31.550 [2024-11-18 03:10:34.923723] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:31.810 ************************************ 00:10:31.810 END TEST raid_state_function_test 00:10:31.810 ************************************ 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:31.810 00:10:31.810 real 0m9.571s 00:10:31.810 user 0m16.321s 00:10:31.810 sys 0m2.025s 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.810 03:10:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:31.810 03:10:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:31.810 03:10:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:31.810 03:10:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:31.810 ************************************ 00:10:31.810 START TEST raid_state_function_test_sb 00:10:31.810 ************************************ 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- 
# echo BaseBdev3 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:31.810 Process raid pid: 82967 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82967 00:10:31.810 
03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82967' 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82967 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82967 ']' 00:10:31.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:31.810 03:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.810 [2024-11-18 03:10:35.328122] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:31.810 [2024-11-18 03:10:35.328335] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.070 [2024-11-18 03:10:35.490126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.070 [2024-11-18 03:10:35.542184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.070 [2024-11-18 03:10:35.585913] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.070 [2024-11-18 03:10:35.585989] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.639 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:32.639 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:32.639 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:32.639 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.639 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.639 [2024-11-18 03:10:36.176175] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:32.639 [2024-11-18 03:10:36.176300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:32.639 [2024-11-18 03:10:36.176341] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.639 [2024-11-18 03:10:36.176411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:32.639 [2024-11-18 03:10:36.176435] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:32.639 [2024-11-18 03:10:36.176481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:32.639 [2024-11-18 03:10:36.176509] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:32.639 [2024-11-18 03:10:36.176553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:32.639 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.639 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:32.639 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.639 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.639 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.639 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.639 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.639 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.639 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.639 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.640 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.640 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.640 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.640 03:10:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.640 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.640 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.902 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.902 "name": "Existed_Raid", 00:10:32.902 "uuid": "0e1a5bb7-37eb-4f8a-873a-32b65fb3fac9", 00:10:32.902 "strip_size_kb": 64, 00:10:32.902 "state": "configuring", 00:10:32.902 "raid_level": "concat", 00:10:32.902 "superblock": true, 00:10:32.902 "num_base_bdevs": 4, 00:10:32.902 "num_base_bdevs_discovered": 0, 00:10:32.902 "num_base_bdevs_operational": 4, 00:10:32.902 "base_bdevs_list": [ 00:10:32.902 { 00:10:32.902 "name": "BaseBdev1", 00:10:32.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.902 "is_configured": false, 00:10:32.902 "data_offset": 0, 00:10:32.902 "data_size": 0 00:10:32.902 }, 00:10:32.902 { 00:10:32.902 "name": "BaseBdev2", 00:10:32.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.902 "is_configured": false, 00:10:32.903 "data_offset": 0, 00:10:32.903 "data_size": 0 00:10:32.903 }, 00:10:32.903 { 00:10:32.903 "name": "BaseBdev3", 00:10:32.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.903 "is_configured": false, 00:10:32.903 "data_offset": 0, 00:10:32.903 "data_size": 0 00:10:32.903 }, 00:10:32.903 { 00:10:32.903 "name": "BaseBdev4", 00:10:32.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.903 "is_configured": false, 00:10:32.903 "data_offset": 0, 00:10:32.903 "data_size": 0 00:10:32.903 } 00:10:32.903 ] 00:10:32.903 }' 00:10:32.903 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.903 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.163 03:10:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:33.163 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.163 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.163 [2024-11-18 03:10:36.607373] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.163 [2024-11-18 03:10:36.607464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.164 [2024-11-18 03:10:36.615415] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:33.164 [2024-11-18 03:10:36.615504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:33.164 [2024-11-18 03:10:36.615538] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.164 [2024-11-18 03:10:36.615566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.164 [2024-11-18 03:10:36.615634] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:33.164 [2024-11-18 03:10:36.615668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:33.164 [2024-11-18 03:10:36.615700] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:33.164 [2024-11-18 03:10:36.615726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.164 [2024-11-18 03:10:36.632494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.164 BaseBdev1 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.164 [ 00:10:33.164 { 00:10:33.164 "name": "BaseBdev1", 00:10:33.164 "aliases": [ 00:10:33.164 "1ecd4eef-c70f-4830-b6c9-22373103d94a" 00:10:33.164 ], 00:10:33.164 "product_name": "Malloc disk", 00:10:33.164 "block_size": 512, 00:10:33.164 "num_blocks": 65536, 00:10:33.164 "uuid": "1ecd4eef-c70f-4830-b6c9-22373103d94a", 00:10:33.164 "assigned_rate_limits": { 00:10:33.164 "rw_ios_per_sec": 0, 00:10:33.164 "rw_mbytes_per_sec": 0, 00:10:33.164 "r_mbytes_per_sec": 0, 00:10:33.164 "w_mbytes_per_sec": 0 00:10:33.164 }, 00:10:33.164 "claimed": true, 00:10:33.164 "claim_type": "exclusive_write", 00:10:33.164 "zoned": false, 00:10:33.164 "supported_io_types": { 00:10:33.164 "read": true, 00:10:33.164 "write": true, 00:10:33.164 "unmap": true, 00:10:33.164 "flush": true, 00:10:33.164 "reset": true, 00:10:33.164 "nvme_admin": false, 00:10:33.164 "nvme_io": false, 00:10:33.164 "nvme_io_md": false, 00:10:33.164 "write_zeroes": true, 00:10:33.164 "zcopy": true, 00:10:33.164 "get_zone_info": false, 00:10:33.164 "zone_management": false, 00:10:33.164 "zone_append": false, 00:10:33.164 "compare": false, 00:10:33.164 "compare_and_write": false, 00:10:33.164 "abort": true, 00:10:33.164 "seek_hole": false, 00:10:33.164 "seek_data": false, 00:10:33.164 "copy": true, 00:10:33.164 "nvme_iov_md": false 00:10:33.164 }, 00:10:33.164 "memory_domains": [ 00:10:33.164 { 00:10:33.164 "dma_device_id": "system", 00:10:33.164 "dma_device_type": 1 00:10:33.164 }, 00:10:33.164 { 00:10:33.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.164 "dma_device_type": 2 00:10:33.164 } 
00:10:33.164 ], 00:10:33.164 "driver_specific": {} 00:10:33.164 } 00:10:33.164 ] 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.164 03:10:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.164 "name": "Existed_Raid", 00:10:33.164 "uuid": "3269f5bf-6e90-4c91-b46a-6bfc4de4c3c2", 00:10:33.164 "strip_size_kb": 64, 00:10:33.164 "state": "configuring", 00:10:33.164 "raid_level": "concat", 00:10:33.164 "superblock": true, 00:10:33.164 "num_base_bdevs": 4, 00:10:33.164 "num_base_bdevs_discovered": 1, 00:10:33.164 "num_base_bdevs_operational": 4, 00:10:33.164 "base_bdevs_list": [ 00:10:33.164 { 00:10:33.164 "name": "BaseBdev1", 00:10:33.164 "uuid": "1ecd4eef-c70f-4830-b6c9-22373103d94a", 00:10:33.164 "is_configured": true, 00:10:33.164 "data_offset": 2048, 00:10:33.164 "data_size": 63488 00:10:33.164 }, 00:10:33.164 { 00:10:33.164 "name": "BaseBdev2", 00:10:33.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.164 "is_configured": false, 00:10:33.164 "data_offset": 0, 00:10:33.164 "data_size": 0 00:10:33.164 }, 00:10:33.164 { 00:10:33.164 "name": "BaseBdev3", 00:10:33.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.164 "is_configured": false, 00:10:33.164 "data_offset": 0, 00:10:33.164 "data_size": 0 00:10:33.164 }, 00:10:33.164 { 00:10:33.164 "name": "BaseBdev4", 00:10:33.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.164 "is_configured": false, 00:10:33.164 "data_offset": 0, 00:10:33.164 "data_size": 0 00:10:33.164 } 00:10:33.164 ] 00:10:33.164 }' 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.164 03:10:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.735 03:10:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.735 [2024-11-18 03:10:37.135709] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.735 [2024-11-18 03:10:37.135821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.735 [2024-11-18 03:10:37.147729] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.735 [2024-11-18 03:10:37.149701] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.735 [2024-11-18 03:10:37.149782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.735 [2024-11-18 03:10:37.149826] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:33.735 [2024-11-18 03:10:37.149849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:33.735 [2024-11-18 03:10:37.149868] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:33.735 [2024-11-18 03:10:37.149888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.735 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:33.735 "name": "Existed_Raid", 00:10:33.735 "uuid": "f2bbf473-ee1c-4a55-9663-b9f1fc33a35a", 00:10:33.735 "strip_size_kb": 64, 00:10:33.735 "state": "configuring", 00:10:33.735 "raid_level": "concat", 00:10:33.735 "superblock": true, 00:10:33.735 "num_base_bdevs": 4, 00:10:33.735 "num_base_bdevs_discovered": 1, 00:10:33.735 "num_base_bdevs_operational": 4, 00:10:33.735 "base_bdevs_list": [ 00:10:33.735 { 00:10:33.735 "name": "BaseBdev1", 00:10:33.735 "uuid": "1ecd4eef-c70f-4830-b6c9-22373103d94a", 00:10:33.735 "is_configured": true, 00:10:33.735 "data_offset": 2048, 00:10:33.735 "data_size": 63488 00:10:33.735 }, 00:10:33.735 { 00:10:33.735 "name": "BaseBdev2", 00:10:33.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.735 "is_configured": false, 00:10:33.735 "data_offset": 0, 00:10:33.735 "data_size": 0 00:10:33.735 }, 00:10:33.735 { 00:10:33.735 "name": "BaseBdev3", 00:10:33.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.736 "is_configured": false, 00:10:33.736 "data_offset": 0, 00:10:33.736 "data_size": 0 00:10:33.736 }, 00:10:33.736 { 00:10:33.736 "name": "BaseBdev4", 00:10:33.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.736 "is_configured": false, 00:10:33.736 "data_offset": 0, 00:10:33.736 "data_size": 0 00:10:33.736 } 00:10:33.736 ] 00:10:33.736 }' 00:10:33.736 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.736 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.306 [2024-11-18 03:10:37.608316] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:34.306 BaseBdev2 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.306 [ 00:10:34.306 { 00:10:34.306 "name": "BaseBdev2", 00:10:34.306 "aliases": [ 00:10:34.306 "1d3ad49a-21ed-4918-9280-07eb95f15f80" 00:10:34.306 ], 00:10:34.306 "product_name": "Malloc disk", 00:10:34.306 "block_size": 512, 00:10:34.306 "num_blocks": 65536, 00:10:34.306 "uuid": "1d3ad49a-21ed-4918-9280-07eb95f15f80", 
00:10:34.306 "assigned_rate_limits": { 00:10:34.306 "rw_ios_per_sec": 0, 00:10:34.306 "rw_mbytes_per_sec": 0, 00:10:34.306 "r_mbytes_per_sec": 0, 00:10:34.306 "w_mbytes_per_sec": 0 00:10:34.306 }, 00:10:34.306 "claimed": true, 00:10:34.306 "claim_type": "exclusive_write", 00:10:34.306 "zoned": false, 00:10:34.306 "supported_io_types": { 00:10:34.306 "read": true, 00:10:34.306 "write": true, 00:10:34.306 "unmap": true, 00:10:34.306 "flush": true, 00:10:34.306 "reset": true, 00:10:34.306 "nvme_admin": false, 00:10:34.306 "nvme_io": false, 00:10:34.306 "nvme_io_md": false, 00:10:34.306 "write_zeroes": true, 00:10:34.306 "zcopy": true, 00:10:34.306 "get_zone_info": false, 00:10:34.306 "zone_management": false, 00:10:34.306 "zone_append": false, 00:10:34.306 "compare": false, 00:10:34.306 "compare_and_write": false, 00:10:34.306 "abort": true, 00:10:34.306 "seek_hole": false, 00:10:34.306 "seek_data": false, 00:10:34.306 "copy": true, 00:10:34.306 "nvme_iov_md": false 00:10:34.306 }, 00:10:34.306 "memory_domains": [ 00:10:34.306 { 00:10:34.306 "dma_device_id": "system", 00:10:34.306 "dma_device_type": 1 00:10:34.306 }, 00:10:34.306 { 00:10:34.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.306 "dma_device_type": 2 00:10:34.306 } 00:10:34.306 ], 00:10:34.306 "driver_specific": {} 00:10:34.306 } 00:10:34.306 ] 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.306 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.306 "name": "Existed_Raid", 00:10:34.306 "uuid": "f2bbf473-ee1c-4a55-9663-b9f1fc33a35a", 00:10:34.306 "strip_size_kb": 64, 00:10:34.306 "state": "configuring", 00:10:34.306 "raid_level": "concat", 00:10:34.306 "superblock": true, 00:10:34.306 "num_base_bdevs": 4, 00:10:34.306 "num_base_bdevs_discovered": 2, 00:10:34.306 
"num_base_bdevs_operational": 4, 00:10:34.306 "base_bdevs_list": [ 00:10:34.306 { 00:10:34.307 "name": "BaseBdev1", 00:10:34.307 "uuid": "1ecd4eef-c70f-4830-b6c9-22373103d94a", 00:10:34.307 "is_configured": true, 00:10:34.307 "data_offset": 2048, 00:10:34.307 "data_size": 63488 00:10:34.307 }, 00:10:34.307 { 00:10:34.307 "name": "BaseBdev2", 00:10:34.307 "uuid": "1d3ad49a-21ed-4918-9280-07eb95f15f80", 00:10:34.307 "is_configured": true, 00:10:34.307 "data_offset": 2048, 00:10:34.307 "data_size": 63488 00:10:34.307 }, 00:10:34.307 { 00:10:34.307 "name": "BaseBdev3", 00:10:34.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.307 "is_configured": false, 00:10:34.307 "data_offset": 0, 00:10:34.307 "data_size": 0 00:10:34.307 }, 00:10:34.307 { 00:10:34.307 "name": "BaseBdev4", 00:10:34.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.307 "is_configured": false, 00:10:34.307 "data_offset": 0, 00:10:34.307 "data_size": 0 00:10:34.307 } 00:10:34.307 ] 00:10:34.307 }' 00:10:34.307 03:10:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.307 03:10:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.567 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:34.567 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.567 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.567 [2024-11-18 03:10:38.110686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.567 BaseBdev3 00:10:34.567 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.567 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:34.567 03:10:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:34.567 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:34.567 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:34.567 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:34.567 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:34.567 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:34.567 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.567 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.567 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.567 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:34.567 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.567 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.567 [ 00:10:34.567 { 00:10:34.567 "name": "BaseBdev3", 00:10:34.567 "aliases": [ 00:10:34.567 "e84b24a8-b547-4078-9621-7eeeed3d89ee" 00:10:34.567 ], 00:10:34.567 "product_name": "Malloc disk", 00:10:34.567 "block_size": 512, 00:10:34.567 "num_blocks": 65536, 00:10:34.567 "uuid": "e84b24a8-b547-4078-9621-7eeeed3d89ee", 00:10:34.567 "assigned_rate_limits": { 00:10:34.567 "rw_ios_per_sec": 0, 00:10:34.567 "rw_mbytes_per_sec": 0, 00:10:34.567 "r_mbytes_per_sec": 0, 00:10:34.567 "w_mbytes_per_sec": 0 00:10:34.567 }, 00:10:34.567 "claimed": true, 00:10:34.567 "claim_type": "exclusive_write", 00:10:34.567 "zoned": false, 00:10:34.567 "supported_io_types": { 
00:10:34.567 "read": true, 00:10:34.567 "write": true, 00:10:34.567 "unmap": true, 00:10:34.567 "flush": true, 00:10:34.567 "reset": true, 00:10:34.827 "nvme_admin": false, 00:10:34.827 "nvme_io": false, 00:10:34.827 "nvme_io_md": false, 00:10:34.827 "write_zeroes": true, 00:10:34.827 "zcopy": true, 00:10:34.827 "get_zone_info": false, 00:10:34.827 "zone_management": false, 00:10:34.827 "zone_append": false, 00:10:34.827 "compare": false, 00:10:34.827 "compare_and_write": false, 00:10:34.827 "abort": true, 00:10:34.827 "seek_hole": false, 00:10:34.827 "seek_data": false, 00:10:34.827 "copy": true, 00:10:34.827 "nvme_iov_md": false 00:10:34.827 }, 00:10:34.827 "memory_domains": [ 00:10:34.827 { 00:10:34.827 "dma_device_id": "system", 00:10:34.827 "dma_device_type": 1 00:10:34.827 }, 00:10:34.827 { 00:10:34.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.827 "dma_device_type": 2 00:10:34.827 } 00:10:34.827 ], 00:10:34.827 "driver_specific": {} 00:10:34.827 } 00:10:34.827 ] 00:10:34.827 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.827 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:34.827 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:34.827 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.827 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:34.827 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.827 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.827 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.827 03:10:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.827 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.827 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.827 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.827 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.827 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.827 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.827 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.827 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.828 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.828 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.828 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.828 "name": "Existed_Raid", 00:10:34.828 "uuid": "f2bbf473-ee1c-4a55-9663-b9f1fc33a35a", 00:10:34.828 "strip_size_kb": 64, 00:10:34.828 "state": "configuring", 00:10:34.828 "raid_level": "concat", 00:10:34.828 "superblock": true, 00:10:34.828 "num_base_bdevs": 4, 00:10:34.828 "num_base_bdevs_discovered": 3, 00:10:34.828 "num_base_bdevs_operational": 4, 00:10:34.828 "base_bdevs_list": [ 00:10:34.828 { 00:10:34.828 "name": "BaseBdev1", 00:10:34.828 "uuid": "1ecd4eef-c70f-4830-b6c9-22373103d94a", 00:10:34.828 "is_configured": true, 00:10:34.828 "data_offset": 2048, 00:10:34.828 "data_size": 63488 00:10:34.828 }, 00:10:34.828 { 00:10:34.828 "name": "BaseBdev2", 00:10:34.828 
"uuid": "1d3ad49a-21ed-4918-9280-07eb95f15f80", 00:10:34.828 "is_configured": true, 00:10:34.828 "data_offset": 2048, 00:10:34.828 "data_size": 63488 00:10:34.828 }, 00:10:34.828 { 00:10:34.828 "name": "BaseBdev3", 00:10:34.828 "uuid": "e84b24a8-b547-4078-9621-7eeeed3d89ee", 00:10:34.828 "is_configured": true, 00:10:34.828 "data_offset": 2048, 00:10:34.828 "data_size": 63488 00:10:34.828 }, 00:10:34.828 { 00:10:34.828 "name": "BaseBdev4", 00:10:34.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.828 "is_configured": false, 00:10:34.828 "data_offset": 0, 00:10:34.828 "data_size": 0 00:10:34.828 } 00:10:34.828 ] 00:10:34.828 }' 00:10:34.828 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.828 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.087 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:35.087 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.087 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.087 [2024-11-18 03:10:38.633048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:35.087 BaseBdev4 00:10:35.087 [2024-11-18 03:10:38.633366] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:35.087 [2024-11-18 03:10:38.633409] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:35.087 [2024-11-18 03:10:38.633705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:35.087 [2024-11-18 03:10:38.633831] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:35.087 [2024-11-18 03:10:38.633848] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:10:35.087 [2024-11-18 03:10:38.634000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.087 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.087 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:35.087 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:35.087 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.087 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:35.087 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.087 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.087 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.087 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.087 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.087 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.087 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:35.087 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.087 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.087 [ 00:10:35.087 { 00:10:35.087 "name": "BaseBdev4", 00:10:35.087 "aliases": [ 00:10:35.087 "0c36b780-f5fa-4de6-91ec-a1f276f95197" 00:10:35.087 ], 00:10:35.087 "product_name": "Malloc disk", 00:10:35.087 "block_size": 512, 00:10:35.087 
"num_blocks": 65536, 00:10:35.088 "uuid": "0c36b780-f5fa-4de6-91ec-a1f276f95197", 00:10:35.088 "assigned_rate_limits": { 00:10:35.088 "rw_ios_per_sec": 0, 00:10:35.088 "rw_mbytes_per_sec": 0, 00:10:35.088 "r_mbytes_per_sec": 0, 00:10:35.088 "w_mbytes_per_sec": 0 00:10:35.088 }, 00:10:35.088 "claimed": true, 00:10:35.088 "claim_type": "exclusive_write", 00:10:35.088 "zoned": false, 00:10:35.347 "supported_io_types": { 00:10:35.347 "read": true, 00:10:35.347 "write": true, 00:10:35.347 "unmap": true, 00:10:35.347 "flush": true, 00:10:35.347 "reset": true, 00:10:35.347 "nvme_admin": false, 00:10:35.347 "nvme_io": false, 00:10:35.347 "nvme_io_md": false, 00:10:35.347 "write_zeroes": true, 00:10:35.347 "zcopy": true, 00:10:35.347 "get_zone_info": false, 00:10:35.347 "zone_management": false, 00:10:35.347 "zone_append": false, 00:10:35.347 "compare": false, 00:10:35.347 "compare_and_write": false, 00:10:35.347 "abort": true, 00:10:35.347 "seek_hole": false, 00:10:35.347 "seek_data": false, 00:10:35.347 "copy": true, 00:10:35.347 "nvme_iov_md": false 00:10:35.347 }, 00:10:35.347 "memory_domains": [ 00:10:35.347 { 00:10:35.347 "dma_device_id": "system", 00:10:35.347 "dma_device_type": 1 00:10:35.347 }, 00:10:35.347 { 00:10:35.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.347 "dma_device_type": 2 00:10:35.347 } 00:10:35.347 ], 00:10:35.347 "driver_specific": {} 00:10:35.347 } 00:10:35.347 ] 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.347 "name": "Existed_Raid", 00:10:35.347 "uuid": "f2bbf473-ee1c-4a55-9663-b9f1fc33a35a", 00:10:35.347 "strip_size_kb": 64, 00:10:35.347 "state": "online", 00:10:35.347 "raid_level": "concat", 00:10:35.347 "superblock": true, 00:10:35.347 "num_base_bdevs": 4, 
00:10:35.347 "num_base_bdevs_discovered": 4, 00:10:35.347 "num_base_bdevs_operational": 4, 00:10:35.347 "base_bdevs_list": [ 00:10:35.347 { 00:10:35.347 "name": "BaseBdev1", 00:10:35.347 "uuid": "1ecd4eef-c70f-4830-b6c9-22373103d94a", 00:10:35.347 "is_configured": true, 00:10:35.347 "data_offset": 2048, 00:10:35.347 "data_size": 63488 00:10:35.347 }, 00:10:35.347 { 00:10:35.347 "name": "BaseBdev2", 00:10:35.347 "uuid": "1d3ad49a-21ed-4918-9280-07eb95f15f80", 00:10:35.347 "is_configured": true, 00:10:35.347 "data_offset": 2048, 00:10:35.347 "data_size": 63488 00:10:35.347 }, 00:10:35.347 { 00:10:35.347 "name": "BaseBdev3", 00:10:35.347 "uuid": "e84b24a8-b547-4078-9621-7eeeed3d89ee", 00:10:35.347 "is_configured": true, 00:10:35.347 "data_offset": 2048, 00:10:35.347 "data_size": 63488 00:10:35.347 }, 00:10:35.347 { 00:10:35.347 "name": "BaseBdev4", 00:10:35.347 "uuid": "0c36b780-f5fa-4de6-91ec-a1f276f95197", 00:10:35.347 "is_configured": true, 00:10:35.347 "data_offset": 2048, 00:10:35.347 "data_size": 63488 00:10:35.347 } 00:10:35.347 ] 00:10:35.347 }' 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.347 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.607 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:35.607 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:35.607 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:35.607 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:35.607 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:35.607 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:35.607 
03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:35.607 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:35.607 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.607 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.607 [2024-11-18 03:10:39.116634] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.607 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.607 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:35.607 "name": "Existed_Raid", 00:10:35.607 "aliases": [ 00:10:35.607 "f2bbf473-ee1c-4a55-9663-b9f1fc33a35a" 00:10:35.607 ], 00:10:35.607 "product_name": "Raid Volume", 00:10:35.607 "block_size": 512, 00:10:35.607 "num_blocks": 253952, 00:10:35.607 "uuid": "f2bbf473-ee1c-4a55-9663-b9f1fc33a35a", 00:10:35.607 "assigned_rate_limits": { 00:10:35.607 "rw_ios_per_sec": 0, 00:10:35.607 "rw_mbytes_per_sec": 0, 00:10:35.607 "r_mbytes_per_sec": 0, 00:10:35.607 "w_mbytes_per_sec": 0 00:10:35.607 }, 00:10:35.607 "claimed": false, 00:10:35.607 "zoned": false, 00:10:35.607 "supported_io_types": { 00:10:35.607 "read": true, 00:10:35.607 "write": true, 00:10:35.607 "unmap": true, 00:10:35.607 "flush": true, 00:10:35.607 "reset": true, 00:10:35.607 "nvme_admin": false, 00:10:35.607 "nvme_io": false, 00:10:35.607 "nvme_io_md": false, 00:10:35.607 "write_zeroes": true, 00:10:35.607 "zcopy": false, 00:10:35.607 "get_zone_info": false, 00:10:35.607 "zone_management": false, 00:10:35.607 "zone_append": false, 00:10:35.607 "compare": false, 00:10:35.607 "compare_and_write": false, 00:10:35.607 "abort": false, 00:10:35.607 "seek_hole": false, 00:10:35.607 "seek_data": false, 00:10:35.607 "copy": false, 00:10:35.607 
"nvme_iov_md": false 00:10:35.607 }, 00:10:35.607 "memory_domains": [ 00:10:35.607 { 00:10:35.607 "dma_device_id": "system", 00:10:35.607 "dma_device_type": 1 00:10:35.607 }, 00:10:35.607 { 00:10:35.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.607 "dma_device_type": 2 00:10:35.607 }, 00:10:35.607 { 00:10:35.607 "dma_device_id": "system", 00:10:35.608 "dma_device_type": 1 00:10:35.608 }, 00:10:35.608 { 00:10:35.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.608 "dma_device_type": 2 00:10:35.608 }, 00:10:35.608 { 00:10:35.608 "dma_device_id": "system", 00:10:35.608 "dma_device_type": 1 00:10:35.608 }, 00:10:35.608 { 00:10:35.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.608 "dma_device_type": 2 00:10:35.608 }, 00:10:35.608 { 00:10:35.608 "dma_device_id": "system", 00:10:35.608 "dma_device_type": 1 00:10:35.608 }, 00:10:35.608 { 00:10:35.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.608 "dma_device_type": 2 00:10:35.608 } 00:10:35.608 ], 00:10:35.608 "driver_specific": { 00:10:35.608 "raid": { 00:10:35.608 "uuid": "f2bbf473-ee1c-4a55-9663-b9f1fc33a35a", 00:10:35.608 "strip_size_kb": 64, 00:10:35.608 "state": "online", 00:10:35.608 "raid_level": "concat", 00:10:35.608 "superblock": true, 00:10:35.608 "num_base_bdevs": 4, 00:10:35.608 "num_base_bdevs_discovered": 4, 00:10:35.608 "num_base_bdevs_operational": 4, 00:10:35.608 "base_bdevs_list": [ 00:10:35.608 { 00:10:35.608 "name": "BaseBdev1", 00:10:35.608 "uuid": "1ecd4eef-c70f-4830-b6c9-22373103d94a", 00:10:35.608 "is_configured": true, 00:10:35.608 "data_offset": 2048, 00:10:35.608 "data_size": 63488 00:10:35.608 }, 00:10:35.608 { 00:10:35.608 "name": "BaseBdev2", 00:10:35.608 "uuid": "1d3ad49a-21ed-4918-9280-07eb95f15f80", 00:10:35.608 "is_configured": true, 00:10:35.608 "data_offset": 2048, 00:10:35.608 "data_size": 63488 00:10:35.608 }, 00:10:35.608 { 00:10:35.608 "name": "BaseBdev3", 00:10:35.608 "uuid": "e84b24a8-b547-4078-9621-7eeeed3d89ee", 00:10:35.608 "is_configured": true, 
00:10:35.608 "data_offset": 2048, 00:10:35.608 "data_size": 63488 00:10:35.608 }, 00:10:35.608 { 00:10:35.608 "name": "BaseBdev4", 00:10:35.608 "uuid": "0c36b780-f5fa-4de6-91ec-a1f276f95197", 00:10:35.608 "is_configured": true, 00:10:35.608 "data_offset": 2048, 00:10:35.608 "data_size": 63488 00:10:35.608 } 00:10:35.608 ] 00:10:35.608 } 00:10:35.608 } 00:10:35.608 }' 00:10:35.608 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:35.868 BaseBdev2 00:10:35.868 BaseBdev3 00:10:35.868 BaseBdev4' 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.868 03:10:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.868 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.868 [2024-11-18 03:10:39.435822] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:35.868 [2024-11-18 03:10:39.435909] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.868 [2024-11-18 03:10:39.435991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.128 "name": "Existed_Raid", 00:10:36.128 "uuid": "f2bbf473-ee1c-4a55-9663-b9f1fc33a35a", 00:10:36.128 "strip_size_kb": 64, 00:10:36.128 "state": "offline", 00:10:36.128 "raid_level": "concat", 00:10:36.128 "superblock": true, 00:10:36.128 "num_base_bdevs": 4, 00:10:36.128 "num_base_bdevs_discovered": 3, 00:10:36.128 "num_base_bdevs_operational": 3, 00:10:36.128 "base_bdevs_list": [ 00:10:36.128 { 00:10:36.128 "name": null, 00:10:36.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.128 "is_configured": false, 00:10:36.128 "data_offset": 0, 00:10:36.128 "data_size": 63488 00:10:36.128 }, 00:10:36.128 { 00:10:36.128 "name": "BaseBdev2", 00:10:36.128 "uuid": "1d3ad49a-21ed-4918-9280-07eb95f15f80", 00:10:36.128 "is_configured": true, 00:10:36.128 "data_offset": 2048, 00:10:36.128 "data_size": 63488 00:10:36.128 }, 00:10:36.128 { 00:10:36.128 "name": "BaseBdev3", 00:10:36.128 "uuid": "e84b24a8-b547-4078-9621-7eeeed3d89ee", 00:10:36.128 "is_configured": true, 00:10:36.128 "data_offset": 2048, 00:10:36.128 "data_size": 63488 00:10:36.128 }, 00:10:36.128 { 00:10:36.128 "name": "BaseBdev4", 00:10:36.128 "uuid": "0c36b780-f5fa-4de6-91ec-a1f276f95197", 00:10:36.128 "is_configured": true, 00:10:36.128 "data_offset": 2048, 00:10:36.128 "data_size": 63488 00:10:36.128 } 00:10:36.128 ] 00:10:36.128 }' 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.128 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.389 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:36.389 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.389 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:36.389 03:10:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.389 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.389 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.389 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.389 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:36.389 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:36.389 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:36.389 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.389 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.389 [2024-11-18 03:10:39.942602] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:36.389 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.389 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:36.389 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.389 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.389 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:36.389 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.389 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.649 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.649 [2024-11-18 03:10:40.014322] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:36.649 03:10:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.649 [2024-11-18 03:10:40.082003] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:36.649 [2024-11-18 03:10:40.082112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.649 BaseBdev2 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.649 [ 00:10:36.649 { 00:10:36.649 "name": "BaseBdev2", 00:10:36.649 "aliases": [ 00:10:36.649 
"5197b551-5498-49dc-81fd-d6d768438387" 00:10:36.649 ], 00:10:36.649 "product_name": "Malloc disk", 00:10:36.649 "block_size": 512, 00:10:36.649 "num_blocks": 65536, 00:10:36.649 "uuid": "5197b551-5498-49dc-81fd-d6d768438387", 00:10:36.649 "assigned_rate_limits": { 00:10:36.649 "rw_ios_per_sec": 0, 00:10:36.649 "rw_mbytes_per_sec": 0, 00:10:36.649 "r_mbytes_per_sec": 0, 00:10:36.649 "w_mbytes_per_sec": 0 00:10:36.649 }, 00:10:36.649 "claimed": false, 00:10:36.649 "zoned": false, 00:10:36.649 "supported_io_types": { 00:10:36.649 "read": true, 00:10:36.649 "write": true, 00:10:36.649 "unmap": true, 00:10:36.649 "flush": true, 00:10:36.649 "reset": true, 00:10:36.649 "nvme_admin": false, 00:10:36.649 "nvme_io": false, 00:10:36.649 "nvme_io_md": false, 00:10:36.649 "write_zeroes": true, 00:10:36.649 "zcopy": true, 00:10:36.649 "get_zone_info": false, 00:10:36.649 "zone_management": false, 00:10:36.649 "zone_append": false, 00:10:36.649 "compare": false, 00:10:36.649 "compare_and_write": false, 00:10:36.649 "abort": true, 00:10:36.649 "seek_hole": false, 00:10:36.649 "seek_data": false, 00:10:36.649 "copy": true, 00:10:36.649 "nvme_iov_md": false 00:10:36.649 }, 00:10:36.649 "memory_domains": [ 00:10:36.649 { 00:10:36.649 "dma_device_id": "system", 00:10:36.649 "dma_device_type": 1 00:10:36.649 }, 00:10:36.649 { 00:10:36.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.649 "dma_device_type": 2 00:10:36.649 } 00:10:36.649 ], 00:10:36.649 "driver_specific": {} 00:10:36.649 } 00:10:36.649 ] 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.649 03:10:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.649 BaseBdev3 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.649 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.908 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.908 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:36.908 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.908 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.908 [ 00:10:36.908 { 
00:10:36.908 "name": "BaseBdev3", 00:10:36.908 "aliases": [ 00:10:36.908 "75aff538-82a4-468c-8318-7defc037cb79" 00:10:36.908 ], 00:10:36.908 "product_name": "Malloc disk", 00:10:36.908 "block_size": 512, 00:10:36.908 "num_blocks": 65536, 00:10:36.908 "uuid": "75aff538-82a4-468c-8318-7defc037cb79", 00:10:36.908 "assigned_rate_limits": { 00:10:36.908 "rw_ios_per_sec": 0, 00:10:36.908 "rw_mbytes_per_sec": 0, 00:10:36.908 "r_mbytes_per_sec": 0, 00:10:36.908 "w_mbytes_per_sec": 0 00:10:36.908 }, 00:10:36.908 "claimed": false, 00:10:36.908 "zoned": false, 00:10:36.908 "supported_io_types": { 00:10:36.908 "read": true, 00:10:36.908 "write": true, 00:10:36.908 "unmap": true, 00:10:36.908 "flush": true, 00:10:36.908 "reset": true, 00:10:36.908 "nvme_admin": false, 00:10:36.908 "nvme_io": false, 00:10:36.908 "nvme_io_md": false, 00:10:36.908 "write_zeroes": true, 00:10:36.908 "zcopy": true, 00:10:36.908 "get_zone_info": false, 00:10:36.908 "zone_management": false, 00:10:36.908 "zone_append": false, 00:10:36.908 "compare": false, 00:10:36.908 "compare_and_write": false, 00:10:36.908 "abort": true, 00:10:36.908 "seek_hole": false, 00:10:36.908 "seek_data": false, 00:10:36.908 "copy": true, 00:10:36.908 "nvme_iov_md": false 00:10:36.908 }, 00:10:36.908 "memory_domains": [ 00:10:36.908 { 00:10:36.908 "dma_device_id": "system", 00:10:36.908 "dma_device_type": 1 00:10:36.908 }, 00:10:36.908 { 00:10:36.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.908 "dma_device_type": 2 00:10:36.908 } 00:10:36.908 ], 00:10:36.908 "driver_specific": {} 00:10:36.908 } 00:10:36.908 ] 00:10:36.908 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.908 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:36.908 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:36.908 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:36.908 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.909 BaseBdev4 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:36.909 [ 00:10:36.909 { 00:10:36.909 "name": "BaseBdev4", 00:10:36.909 "aliases": [ 00:10:36.909 "d83a4504-22fa-4d9f-afb2-27298bc0e85a" 00:10:36.909 ], 00:10:36.909 "product_name": "Malloc disk", 00:10:36.909 "block_size": 512, 00:10:36.909 "num_blocks": 65536, 00:10:36.909 "uuid": "d83a4504-22fa-4d9f-afb2-27298bc0e85a", 00:10:36.909 "assigned_rate_limits": { 00:10:36.909 "rw_ios_per_sec": 0, 00:10:36.909 "rw_mbytes_per_sec": 0, 00:10:36.909 "r_mbytes_per_sec": 0, 00:10:36.909 "w_mbytes_per_sec": 0 00:10:36.909 }, 00:10:36.909 "claimed": false, 00:10:36.909 "zoned": false, 00:10:36.909 "supported_io_types": { 00:10:36.909 "read": true, 00:10:36.909 "write": true, 00:10:36.909 "unmap": true, 00:10:36.909 "flush": true, 00:10:36.909 "reset": true, 00:10:36.909 "nvme_admin": false, 00:10:36.909 "nvme_io": false, 00:10:36.909 "nvme_io_md": false, 00:10:36.909 "write_zeroes": true, 00:10:36.909 "zcopy": true, 00:10:36.909 "get_zone_info": false, 00:10:36.909 "zone_management": false, 00:10:36.909 "zone_append": false, 00:10:36.909 "compare": false, 00:10:36.909 "compare_and_write": false, 00:10:36.909 "abort": true, 00:10:36.909 "seek_hole": false, 00:10:36.909 "seek_data": false, 00:10:36.909 "copy": true, 00:10:36.909 "nvme_iov_md": false 00:10:36.909 }, 00:10:36.909 "memory_domains": [ 00:10:36.909 { 00:10:36.909 "dma_device_id": "system", 00:10:36.909 "dma_device_type": 1 00:10:36.909 }, 00:10:36.909 { 00:10:36.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.909 "dma_device_type": 2 00:10:36.909 } 00:10:36.909 ], 00:10:36.909 "driver_specific": {} 00:10:36.909 } 00:10:36.909 ] 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:36.909 03:10:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.909 [2024-11-18 03:10:40.313105] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.909 [2024-11-18 03:10:40.313209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.909 [2024-11-18 03:10:40.313273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.909 [2024-11-18 03:10:40.315391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.909 [2024-11-18 03:10:40.315491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.909 "name": "Existed_Raid", 00:10:36.909 "uuid": "ce63d1ea-c123-4175-aea9-2a1e35098ab6", 00:10:36.909 "strip_size_kb": 64, 00:10:36.909 "state": "configuring", 00:10:36.909 "raid_level": "concat", 00:10:36.909 "superblock": true, 00:10:36.909 "num_base_bdevs": 4, 00:10:36.909 "num_base_bdevs_discovered": 3, 00:10:36.909 "num_base_bdevs_operational": 4, 00:10:36.909 "base_bdevs_list": [ 00:10:36.909 { 00:10:36.909 "name": "BaseBdev1", 00:10:36.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.909 "is_configured": false, 00:10:36.909 "data_offset": 0, 00:10:36.909 "data_size": 0 00:10:36.909 }, 00:10:36.909 { 00:10:36.909 "name": "BaseBdev2", 00:10:36.909 "uuid": "5197b551-5498-49dc-81fd-d6d768438387", 00:10:36.909 "is_configured": true, 00:10:36.909 "data_offset": 2048, 00:10:36.909 "data_size": 63488 
00:10:36.909 }, 00:10:36.909 { 00:10:36.909 "name": "BaseBdev3", 00:10:36.909 "uuid": "75aff538-82a4-468c-8318-7defc037cb79", 00:10:36.909 "is_configured": true, 00:10:36.909 "data_offset": 2048, 00:10:36.909 "data_size": 63488 00:10:36.909 }, 00:10:36.909 { 00:10:36.909 "name": "BaseBdev4", 00:10:36.909 "uuid": "d83a4504-22fa-4d9f-afb2-27298bc0e85a", 00:10:36.909 "is_configured": true, 00:10:36.909 "data_offset": 2048, 00:10:36.909 "data_size": 63488 00:10:36.909 } 00:10:36.909 ] 00:10:36.909 }' 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.909 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.479 [2024-11-18 03:10:40.768288] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.479 "name": "Existed_Raid", 00:10:37.479 "uuid": "ce63d1ea-c123-4175-aea9-2a1e35098ab6", 00:10:37.479 "strip_size_kb": 64, 00:10:37.479 "state": "configuring", 00:10:37.479 "raid_level": "concat", 00:10:37.479 "superblock": true, 00:10:37.479 "num_base_bdevs": 4, 00:10:37.479 "num_base_bdevs_discovered": 2, 00:10:37.479 "num_base_bdevs_operational": 4, 00:10:37.479 "base_bdevs_list": [ 00:10:37.479 { 00:10:37.479 "name": "BaseBdev1", 00:10:37.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.479 "is_configured": false, 00:10:37.479 "data_offset": 0, 00:10:37.479 "data_size": 0 00:10:37.479 }, 00:10:37.479 { 00:10:37.479 "name": null, 00:10:37.479 "uuid": "5197b551-5498-49dc-81fd-d6d768438387", 00:10:37.479 "is_configured": false, 00:10:37.479 "data_offset": 0, 00:10:37.479 "data_size": 63488 
00:10:37.479 }, 00:10:37.479 { 00:10:37.479 "name": "BaseBdev3", 00:10:37.479 "uuid": "75aff538-82a4-468c-8318-7defc037cb79", 00:10:37.479 "is_configured": true, 00:10:37.479 "data_offset": 2048, 00:10:37.479 "data_size": 63488 00:10:37.479 }, 00:10:37.479 { 00:10:37.479 "name": "BaseBdev4", 00:10:37.479 "uuid": "d83a4504-22fa-4d9f-afb2-27298bc0e85a", 00:10:37.479 "is_configured": true, 00:10:37.479 "data_offset": 2048, 00:10:37.479 "data_size": 63488 00:10:37.479 } 00:10:37.479 ] 00:10:37.479 }' 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.479 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.739 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.739 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:37.739 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.739 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.739 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.739 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:37.739 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:37.739 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.739 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.739 [2024-11-18 03:10:41.310576] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.739 BaseBdev1 00:10:37.739 03:10:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.739 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:37.739 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:37.739 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:37.999 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:37.999 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:37.999 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:37.999 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:37.999 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.999 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.999 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.999 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:37.999 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.999 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.999 [ 00:10:37.999 { 00:10:37.999 "name": "BaseBdev1", 00:10:37.999 "aliases": [ 00:10:37.999 "fff2a498-9e1a-4485-b7eb-476c2bcef2e7" 00:10:37.999 ], 00:10:37.999 "product_name": "Malloc disk", 00:10:37.999 "block_size": 512, 00:10:37.999 "num_blocks": 65536, 00:10:37.999 "uuid": "fff2a498-9e1a-4485-b7eb-476c2bcef2e7", 00:10:37.999 "assigned_rate_limits": { 00:10:37.999 "rw_ios_per_sec": 0, 00:10:37.999 "rw_mbytes_per_sec": 0, 
00:10:37.999 "r_mbytes_per_sec": 0, 00:10:37.999 "w_mbytes_per_sec": 0 00:10:37.999 }, 00:10:37.999 "claimed": true, 00:10:37.999 "claim_type": "exclusive_write", 00:10:37.999 "zoned": false, 00:10:37.999 "supported_io_types": { 00:10:37.999 "read": true, 00:10:37.999 "write": true, 00:10:37.999 "unmap": true, 00:10:37.999 "flush": true, 00:10:37.999 "reset": true, 00:10:37.999 "nvme_admin": false, 00:10:37.999 "nvme_io": false, 00:10:37.999 "nvme_io_md": false, 00:10:37.999 "write_zeroes": true, 00:10:37.999 "zcopy": true, 00:10:37.999 "get_zone_info": false, 00:10:37.999 "zone_management": false, 00:10:37.999 "zone_append": false, 00:10:37.999 "compare": false, 00:10:37.999 "compare_and_write": false, 00:10:37.999 "abort": true, 00:10:37.999 "seek_hole": false, 00:10:37.999 "seek_data": false, 00:10:37.999 "copy": true, 00:10:37.999 "nvme_iov_md": false 00:10:37.999 }, 00:10:37.999 "memory_domains": [ 00:10:37.999 { 00:10:37.999 "dma_device_id": "system", 00:10:37.999 "dma_device_type": 1 00:10:37.999 }, 00:10:37.999 { 00:10:37.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.999 "dma_device_type": 2 00:10:37.999 } 00:10:37.999 ], 00:10:37.999 "driver_specific": {} 00:10:37.999 } 00:10:37.999 ] 00:10:37.999 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.000 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:38.000 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:38.000 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.000 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.000 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.000 03:10:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.000 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.000 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.000 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.000 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.000 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.000 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.000 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.000 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.000 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.000 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.000 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.000 "name": "Existed_Raid", 00:10:38.000 "uuid": "ce63d1ea-c123-4175-aea9-2a1e35098ab6", 00:10:38.000 "strip_size_kb": 64, 00:10:38.000 "state": "configuring", 00:10:38.000 "raid_level": "concat", 00:10:38.000 "superblock": true, 00:10:38.000 "num_base_bdevs": 4, 00:10:38.000 "num_base_bdevs_discovered": 3, 00:10:38.000 "num_base_bdevs_operational": 4, 00:10:38.000 "base_bdevs_list": [ 00:10:38.000 { 00:10:38.000 "name": "BaseBdev1", 00:10:38.000 "uuid": "fff2a498-9e1a-4485-b7eb-476c2bcef2e7", 00:10:38.000 "is_configured": true, 00:10:38.000 "data_offset": 2048, 00:10:38.000 "data_size": 63488 00:10:38.000 }, 00:10:38.000 { 
00:10:38.000 "name": null, 00:10:38.000 "uuid": "5197b551-5498-49dc-81fd-d6d768438387", 00:10:38.000 "is_configured": false, 00:10:38.000 "data_offset": 0, 00:10:38.000 "data_size": 63488 00:10:38.000 }, 00:10:38.000 { 00:10:38.000 "name": "BaseBdev3", 00:10:38.000 "uuid": "75aff538-82a4-468c-8318-7defc037cb79", 00:10:38.000 "is_configured": true, 00:10:38.000 "data_offset": 2048, 00:10:38.000 "data_size": 63488 00:10:38.000 }, 00:10:38.000 { 00:10:38.000 "name": "BaseBdev4", 00:10:38.000 "uuid": "d83a4504-22fa-4d9f-afb2-27298bc0e85a", 00:10:38.000 "is_configured": true, 00:10:38.000 "data_offset": 2048, 00:10:38.000 "data_size": 63488 00:10:38.000 } 00:10:38.000 ] 00:10:38.000 }' 00:10:38.000 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.000 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.260 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:38.260 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.260 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.260 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.260 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.260 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:38.260 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:38.260 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.260 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.260 [2024-11-18 03:10:41.829776] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:38.260 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.260 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:38.519 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.519 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.519 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.519 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.519 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.519 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.519 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.519 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.519 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.519 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.519 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.519 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.519 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.519 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.519 03:10:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.519 "name": "Existed_Raid", 00:10:38.519 "uuid": "ce63d1ea-c123-4175-aea9-2a1e35098ab6", 00:10:38.519 "strip_size_kb": 64, 00:10:38.519 "state": "configuring", 00:10:38.519 "raid_level": "concat", 00:10:38.519 "superblock": true, 00:10:38.519 "num_base_bdevs": 4, 00:10:38.519 "num_base_bdevs_discovered": 2, 00:10:38.519 "num_base_bdevs_operational": 4, 00:10:38.519 "base_bdevs_list": [ 00:10:38.519 { 00:10:38.519 "name": "BaseBdev1", 00:10:38.519 "uuid": "fff2a498-9e1a-4485-b7eb-476c2bcef2e7", 00:10:38.519 "is_configured": true, 00:10:38.519 "data_offset": 2048, 00:10:38.519 "data_size": 63488 00:10:38.519 }, 00:10:38.519 { 00:10:38.519 "name": null, 00:10:38.519 "uuid": "5197b551-5498-49dc-81fd-d6d768438387", 00:10:38.519 "is_configured": false, 00:10:38.519 "data_offset": 0, 00:10:38.519 "data_size": 63488 00:10:38.519 }, 00:10:38.519 { 00:10:38.519 "name": null, 00:10:38.519 "uuid": "75aff538-82a4-468c-8318-7defc037cb79", 00:10:38.519 "is_configured": false, 00:10:38.519 "data_offset": 0, 00:10:38.519 "data_size": 63488 00:10:38.519 }, 00:10:38.519 { 00:10:38.519 "name": "BaseBdev4", 00:10:38.519 "uuid": "d83a4504-22fa-4d9f-afb2-27298bc0e85a", 00:10:38.519 "is_configured": true, 00:10:38.519 "data_offset": 2048, 00:10:38.519 "data_size": 63488 00:10:38.519 } 00:10:38.519 ] 00:10:38.519 }' 00:10:38.519 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.519 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.779 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.779 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:38.779 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.779 
03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.779 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.039 [2024-11-18 03:10:42.368884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.039 "name": "Existed_Raid", 00:10:39.039 "uuid": "ce63d1ea-c123-4175-aea9-2a1e35098ab6", 00:10:39.039 "strip_size_kb": 64, 00:10:39.039 "state": "configuring", 00:10:39.039 "raid_level": "concat", 00:10:39.039 "superblock": true, 00:10:39.039 "num_base_bdevs": 4, 00:10:39.039 "num_base_bdevs_discovered": 3, 00:10:39.039 "num_base_bdevs_operational": 4, 00:10:39.039 "base_bdevs_list": [ 00:10:39.039 { 00:10:39.039 "name": "BaseBdev1", 00:10:39.039 "uuid": "fff2a498-9e1a-4485-b7eb-476c2bcef2e7", 00:10:39.039 "is_configured": true, 00:10:39.039 "data_offset": 2048, 00:10:39.039 "data_size": 63488 00:10:39.039 }, 00:10:39.039 { 00:10:39.039 "name": null, 00:10:39.039 "uuid": "5197b551-5498-49dc-81fd-d6d768438387", 00:10:39.039 "is_configured": false, 00:10:39.039 "data_offset": 0, 00:10:39.039 "data_size": 63488 00:10:39.039 }, 00:10:39.039 { 00:10:39.039 "name": "BaseBdev3", 00:10:39.039 "uuid": "75aff538-82a4-468c-8318-7defc037cb79", 00:10:39.039 "is_configured": true, 00:10:39.039 "data_offset": 2048, 00:10:39.039 "data_size": 63488 00:10:39.039 }, 00:10:39.039 { 00:10:39.039 "name": "BaseBdev4", 00:10:39.039 "uuid": 
"d83a4504-22fa-4d9f-afb2-27298bc0e85a", 00:10:39.039 "is_configured": true, 00:10:39.039 "data_offset": 2048, 00:10:39.039 "data_size": 63488 00:10:39.039 } 00:10:39.039 ] 00:10:39.039 }' 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.039 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.300 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.300 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:39.300 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.300 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.300 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.300 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:39.300 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:39.300 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.300 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.300 [2024-11-18 03:10:42.864110] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.560 "name": "Existed_Raid", 00:10:39.560 "uuid": "ce63d1ea-c123-4175-aea9-2a1e35098ab6", 00:10:39.560 "strip_size_kb": 64, 00:10:39.560 "state": "configuring", 00:10:39.560 "raid_level": "concat", 00:10:39.560 "superblock": true, 00:10:39.560 "num_base_bdevs": 4, 00:10:39.560 "num_base_bdevs_discovered": 2, 00:10:39.560 "num_base_bdevs_operational": 4, 00:10:39.560 "base_bdevs_list": [ 00:10:39.560 { 00:10:39.560 "name": null, 00:10:39.560 
"uuid": "fff2a498-9e1a-4485-b7eb-476c2bcef2e7", 00:10:39.560 "is_configured": false, 00:10:39.560 "data_offset": 0, 00:10:39.560 "data_size": 63488 00:10:39.560 }, 00:10:39.560 { 00:10:39.560 "name": null, 00:10:39.560 "uuid": "5197b551-5498-49dc-81fd-d6d768438387", 00:10:39.560 "is_configured": false, 00:10:39.560 "data_offset": 0, 00:10:39.560 "data_size": 63488 00:10:39.560 }, 00:10:39.560 { 00:10:39.560 "name": "BaseBdev3", 00:10:39.560 "uuid": "75aff538-82a4-468c-8318-7defc037cb79", 00:10:39.560 "is_configured": true, 00:10:39.560 "data_offset": 2048, 00:10:39.560 "data_size": 63488 00:10:39.560 }, 00:10:39.560 { 00:10:39.560 "name": "BaseBdev4", 00:10:39.560 "uuid": "d83a4504-22fa-4d9f-afb2-27298bc0e85a", 00:10:39.560 "is_configured": true, 00:10:39.560 "data_offset": 2048, 00:10:39.560 "data_size": 63488 00:10:39.560 } 00:10:39.560 ] 00:10:39.560 }' 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.560 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.820 [2024-11-18 03:10:43.385824] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.820 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.080 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.080 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.080 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.080 03:10:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.080 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.080 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.080 "name": "Existed_Raid", 00:10:40.080 "uuid": "ce63d1ea-c123-4175-aea9-2a1e35098ab6", 00:10:40.080 "strip_size_kb": 64, 00:10:40.080 "state": "configuring", 00:10:40.080 "raid_level": "concat", 00:10:40.080 "superblock": true, 00:10:40.080 "num_base_bdevs": 4, 00:10:40.080 "num_base_bdevs_discovered": 3, 00:10:40.080 "num_base_bdevs_operational": 4, 00:10:40.080 "base_bdevs_list": [ 00:10:40.080 { 00:10:40.080 "name": null, 00:10:40.080 "uuid": "fff2a498-9e1a-4485-b7eb-476c2bcef2e7", 00:10:40.080 "is_configured": false, 00:10:40.080 "data_offset": 0, 00:10:40.080 "data_size": 63488 00:10:40.080 }, 00:10:40.080 { 00:10:40.080 "name": "BaseBdev2", 00:10:40.080 "uuid": "5197b551-5498-49dc-81fd-d6d768438387", 00:10:40.080 "is_configured": true, 00:10:40.080 "data_offset": 2048, 00:10:40.080 "data_size": 63488 00:10:40.080 }, 00:10:40.080 { 00:10:40.080 "name": "BaseBdev3", 00:10:40.080 "uuid": "75aff538-82a4-468c-8318-7defc037cb79", 00:10:40.080 "is_configured": true, 00:10:40.080 "data_offset": 2048, 00:10:40.080 "data_size": 63488 00:10:40.080 }, 00:10:40.080 { 00:10:40.080 "name": "BaseBdev4", 00:10:40.080 "uuid": "d83a4504-22fa-4d9f-afb2-27298bc0e85a", 00:10:40.080 "is_configured": true, 00:10:40.080 "data_offset": 2048, 00:10:40.080 "data_size": 63488 00:10:40.080 } 00:10:40.080 ] 00:10:40.080 }' 00:10:40.080 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.080 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.340 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:40.340 03:10:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.340 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.340 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.340 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.340 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:40.340 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.340 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.340 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.340 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:40.340 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.601 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fff2a498-9e1a-4485-b7eb-476c2bcef2e7 00:10:40.601 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.602 [2024-11-18 03:10:43.939937] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:40.602 [2024-11-18 03:10:43.940214] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:40.602 [2024-11-18 03:10:43.940250] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:40.602 [2024-11-18 03:10:43.940523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:10:40.602 NewBaseBdev 00:10:40.602 [2024-11-18 03:10:43.940673] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:40.602 [2024-11-18 03:10:43.940688] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:40.602 [2024-11-18 03:10:43.940779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.602 03:10:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.602 [ 00:10:40.602 { 00:10:40.602 "name": "NewBaseBdev", 00:10:40.602 "aliases": [ 00:10:40.602 "fff2a498-9e1a-4485-b7eb-476c2bcef2e7" 00:10:40.602 ], 00:10:40.602 "product_name": "Malloc disk", 00:10:40.602 "block_size": 512, 00:10:40.602 "num_blocks": 65536, 00:10:40.602 "uuid": "fff2a498-9e1a-4485-b7eb-476c2bcef2e7", 00:10:40.602 "assigned_rate_limits": { 00:10:40.602 "rw_ios_per_sec": 0, 00:10:40.602 "rw_mbytes_per_sec": 0, 00:10:40.602 "r_mbytes_per_sec": 0, 00:10:40.602 "w_mbytes_per_sec": 0 00:10:40.602 }, 00:10:40.602 "claimed": true, 00:10:40.602 "claim_type": "exclusive_write", 00:10:40.602 "zoned": false, 00:10:40.602 "supported_io_types": { 00:10:40.602 "read": true, 00:10:40.602 "write": true, 00:10:40.602 "unmap": true, 00:10:40.602 "flush": true, 00:10:40.602 "reset": true, 00:10:40.602 "nvme_admin": false, 00:10:40.602 "nvme_io": false, 00:10:40.602 "nvme_io_md": false, 00:10:40.602 "write_zeroes": true, 00:10:40.602 "zcopy": true, 00:10:40.602 "get_zone_info": false, 00:10:40.602 "zone_management": false, 00:10:40.602 "zone_append": false, 00:10:40.602 "compare": false, 00:10:40.602 "compare_and_write": false, 00:10:40.602 "abort": true, 00:10:40.602 "seek_hole": false, 00:10:40.602 "seek_data": false, 00:10:40.602 "copy": true, 00:10:40.602 "nvme_iov_md": false 00:10:40.602 }, 00:10:40.602 "memory_domains": [ 00:10:40.602 { 00:10:40.602 "dma_device_id": "system", 00:10:40.602 "dma_device_type": 1 00:10:40.602 }, 00:10:40.602 { 00:10:40.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.602 "dma_device_type": 2 00:10:40.602 } 00:10:40.602 ], 00:10:40.602 "driver_specific": {} 00:10:40.602 } 00:10:40.602 ] 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:40.602 03:10:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.602 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.602 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.602 "name": "Existed_Raid", 00:10:40.602 "uuid": "ce63d1ea-c123-4175-aea9-2a1e35098ab6", 00:10:40.602 "strip_size_kb": 64, 00:10:40.602 
"state": "online", 00:10:40.602 "raid_level": "concat", 00:10:40.602 "superblock": true, 00:10:40.602 "num_base_bdevs": 4, 00:10:40.602 "num_base_bdevs_discovered": 4, 00:10:40.602 "num_base_bdevs_operational": 4, 00:10:40.602 "base_bdevs_list": [ 00:10:40.602 { 00:10:40.602 "name": "NewBaseBdev", 00:10:40.602 "uuid": "fff2a498-9e1a-4485-b7eb-476c2bcef2e7", 00:10:40.602 "is_configured": true, 00:10:40.602 "data_offset": 2048, 00:10:40.602 "data_size": 63488 00:10:40.602 }, 00:10:40.602 { 00:10:40.602 "name": "BaseBdev2", 00:10:40.602 "uuid": "5197b551-5498-49dc-81fd-d6d768438387", 00:10:40.602 "is_configured": true, 00:10:40.602 "data_offset": 2048, 00:10:40.602 "data_size": 63488 00:10:40.602 }, 00:10:40.602 { 00:10:40.602 "name": "BaseBdev3", 00:10:40.602 "uuid": "75aff538-82a4-468c-8318-7defc037cb79", 00:10:40.603 "is_configured": true, 00:10:40.603 "data_offset": 2048, 00:10:40.603 "data_size": 63488 00:10:40.603 }, 00:10:40.603 { 00:10:40.603 "name": "BaseBdev4", 00:10:40.603 "uuid": "d83a4504-22fa-4d9f-afb2-27298bc0e85a", 00:10:40.603 "is_configured": true, 00:10:40.603 "data_offset": 2048, 00:10:40.603 "data_size": 63488 00:10:40.603 } 00:10:40.603 ] 00:10:40.603 }' 00:10:40.603 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.603 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.173 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:41.173 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:41.173 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:41.173 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:41.173 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:41.173 
03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:41.173 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:41.173 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.173 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.173 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:41.173 [2024-11-18 03:10:44.459447] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.173 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.173 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:41.173 "name": "Existed_Raid", 00:10:41.173 "aliases": [ 00:10:41.173 "ce63d1ea-c123-4175-aea9-2a1e35098ab6" 00:10:41.173 ], 00:10:41.173 "product_name": "Raid Volume", 00:10:41.173 "block_size": 512, 00:10:41.173 "num_blocks": 253952, 00:10:41.173 "uuid": "ce63d1ea-c123-4175-aea9-2a1e35098ab6", 00:10:41.173 "assigned_rate_limits": { 00:10:41.173 "rw_ios_per_sec": 0, 00:10:41.173 "rw_mbytes_per_sec": 0, 00:10:41.173 "r_mbytes_per_sec": 0, 00:10:41.173 "w_mbytes_per_sec": 0 00:10:41.173 }, 00:10:41.173 "claimed": false, 00:10:41.173 "zoned": false, 00:10:41.173 "supported_io_types": { 00:10:41.173 "read": true, 00:10:41.173 "write": true, 00:10:41.173 "unmap": true, 00:10:41.173 "flush": true, 00:10:41.173 "reset": true, 00:10:41.173 "nvme_admin": false, 00:10:41.173 "nvme_io": false, 00:10:41.173 "nvme_io_md": false, 00:10:41.173 "write_zeroes": true, 00:10:41.173 "zcopy": false, 00:10:41.173 "get_zone_info": false, 00:10:41.173 "zone_management": false, 00:10:41.173 "zone_append": false, 00:10:41.173 "compare": false, 00:10:41.173 "compare_and_write": false, 00:10:41.173 "abort": 
false, 00:10:41.173 "seek_hole": false, 00:10:41.173 "seek_data": false, 00:10:41.173 "copy": false, 00:10:41.173 "nvme_iov_md": false 00:10:41.173 }, 00:10:41.173 "memory_domains": [ 00:10:41.173 { 00:10:41.173 "dma_device_id": "system", 00:10:41.173 "dma_device_type": 1 00:10:41.173 }, 00:10:41.173 { 00:10:41.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.173 "dma_device_type": 2 00:10:41.173 }, 00:10:41.173 { 00:10:41.173 "dma_device_id": "system", 00:10:41.173 "dma_device_type": 1 00:10:41.173 }, 00:10:41.173 { 00:10:41.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.173 "dma_device_type": 2 00:10:41.173 }, 00:10:41.173 { 00:10:41.173 "dma_device_id": "system", 00:10:41.173 "dma_device_type": 1 00:10:41.173 }, 00:10:41.173 { 00:10:41.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.173 "dma_device_type": 2 00:10:41.173 }, 00:10:41.173 { 00:10:41.173 "dma_device_id": "system", 00:10:41.173 "dma_device_type": 1 00:10:41.173 }, 00:10:41.173 { 00:10:41.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.173 "dma_device_type": 2 00:10:41.173 } 00:10:41.173 ], 00:10:41.173 "driver_specific": { 00:10:41.173 "raid": { 00:10:41.173 "uuid": "ce63d1ea-c123-4175-aea9-2a1e35098ab6", 00:10:41.173 "strip_size_kb": 64, 00:10:41.173 "state": "online", 00:10:41.173 "raid_level": "concat", 00:10:41.173 "superblock": true, 00:10:41.173 "num_base_bdevs": 4, 00:10:41.173 "num_base_bdevs_discovered": 4, 00:10:41.173 "num_base_bdevs_operational": 4, 00:10:41.173 "base_bdevs_list": [ 00:10:41.173 { 00:10:41.173 "name": "NewBaseBdev", 00:10:41.173 "uuid": "fff2a498-9e1a-4485-b7eb-476c2bcef2e7", 00:10:41.174 "is_configured": true, 00:10:41.174 "data_offset": 2048, 00:10:41.174 "data_size": 63488 00:10:41.174 }, 00:10:41.174 { 00:10:41.174 "name": "BaseBdev2", 00:10:41.174 "uuid": "5197b551-5498-49dc-81fd-d6d768438387", 00:10:41.174 "is_configured": true, 00:10:41.174 "data_offset": 2048, 00:10:41.174 "data_size": 63488 00:10:41.174 }, 00:10:41.174 { 00:10:41.174 
"name": "BaseBdev3", 00:10:41.174 "uuid": "75aff538-82a4-468c-8318-7defc037cb79", 00:10:41.174 "is_configured": true, 00:10:41.174 "data_offset": 2048, 00:10:41.174 "data_size": 63488 00:10:41.174 }, 00:10:41.174 { 00:10:41.174 "name": "BaseBdev4", 00:10:41.174 "uuid": "d83a4504-22fa-4d9f-afb2-27298bc0e85a", 00:10:41.174 "is_configured": true, 00:10:41.174 "data_offset": 2048, 00:10:41.174 "data_size": 63488 00:10:41.174 } 00:10:41.174 ] 00:10:41.174 } 00:10:41.174 } 00:10:41.174 }' 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:41.174 BaseBdev2 00:10:41.174 BaseBdev3 00:10:41.174 BaseBdev4' 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.174 03:10:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.174 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.447 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.447 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.447 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.447 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:41.447 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.447 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.447 [2024-11-18 03:10:44.798518] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.447 [2024-11-18 03:10:44.798547] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.447 [2024-11-18 03:10:44.798622] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.447 [2024-11-18 03:10:44.798692] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.447 [2024-11-18 03:10:44.798702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:10:41.447 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.447 03:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82967 00:10:41.447 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82967 ']' 00:10:41.447 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 82967 00:10:41.447 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:41.447 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:41.447 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82967 00:10:41.447 killing process with pid 82967 00:10:41.447 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:41.447 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:41.448 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82967' 00:10:41.448 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 82967 00:10:41.448 [2024-11-18 03:10:44.845160] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.448 03:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 82967 00:10:41.448 [2024-11-18 03:10:44.886418] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:41.727 03:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:41.727 00:10:41.727 real 0m9.901s 00:10:41.727 user 0m16.927s 00:10:41.727 sys 0m2.069s 00:10:41.727 03:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:41.727 03:10:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.727 ************************************ 00:10:41.727 END TEST raid_state_function_test_sb 00:10:41.727 ************************************ 00:10:41.727 03:10:45 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:41.727 03:10:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:41.727 03:10:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:41.727 03:10:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:41.727 ************************************ 00:10:41.727 START TEST raid_superblock_test 00:10:41.727 ************************************ 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83624 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83624 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83624 ']' 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:41.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:41.727 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.727 [2024-11-18 03:10:45.288772] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:41.727 [2024-11-18 03:10:45.288913] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83624 ] 00:10:42.001 [2024-11-18 03:10:45.450579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.001 [2024-11-18 03:10:45.501247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.001 [2024-11-18 03:10:45.544444] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.001 [2024-11-18 03:10:45.544480] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.572 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:42.572 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:42.572 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:42.572 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:42.572 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:42.572 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:42.572 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:42.572 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:42.572 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:42.572 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:42.572 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:42.572 
03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.572 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.572 malloc1 00:10:42.572 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.572 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:42.572 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.572 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.833 [2024-11-18 03:10:46.151029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:42.833 [2024-11-18 03:10:46.151174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.833 [2024-11-18 03:10:46.151212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:42.833 [2024-11-18 03:10:46.151246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.833 [2024-11-18 03:10:46.153432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.833 [2024-11-18 03:10:46.153507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:42.833 pt1 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.833 malloc2 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.833 [2024-11-18 03:10:46.193518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:42.833 [2024-11-18 03:10:46.193647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.833 [2024-11-18 03:10:46.193690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:42.833 [2024-11-18 03:10:46.193736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.833 [2024-11-18 03:10:46.196008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.833 [2024-11-18 03:10:46.196085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:42.833 
pt2 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:42.833 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.834 malloc3 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.834 [2024-11-18 03:10:46.222355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:42.834 [2024-11-18 03:10:46.222460] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.834 [2024-11-18 03:10:46.222496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:42.834 [2024-11-18 03:10:46.222568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.834 [2024-11-18 03:10:46.224936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.834 [2024-11-18 03:10:46.225036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:42.834 pt3 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.834 malloc4 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.834 [2024-11-18 03:10:46.255619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:42.834 [2024-11-18 03:10:46.255733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.834 [2024-11-18 03:10:46.255790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:42.834 [2024-11-18 03:10:46.255837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.834 [2024-11-18 03:10:46.258218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.834 [2024-11-18 03:10:46.258298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:42.834 pt4 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.834 [2024-11-18 03:10:46.267686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:42.834 [2024-11-18 
03:10:46.269698] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:42.834 [2024-11-18 03:10:46.269795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:42.834 [2024-11-18 03:10:46.269879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:42.834 [2024-11-18 03:10:46.270110] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:42.834 [2024-11-18 03:10:46.270165] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:42.834 [2024-11-18 03:10:46.270476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:42.834 [2024-11-18 03:10:46.270647] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:42.834 [2024-11-18 03:10:46.270695] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:42.834 [2024-11-18 03:10:46.270888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.834 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.834 "name": "raid_bdev1", 00:10:42.834 "uuid": "a66e9a7b-18f1-4ee9-841a-b65f5ea88643", 00:10:42.834 "strip_size_kb": 64, 00:10:42.834 "state": "online", 00:10:42.834 "raid_level": "concat", 00:10:42.834 "superblock": true, 00:10:42.834 "num_base_bdevs": 4, 00:10:42.834 "num_base_bdevs_discovered": 4, 00:10:42.834 "num_base_bdevs_operational": 4, 00:10:42.834 "base_bdevs_list": [ 00:10:42.834 { 00:10:42.834 "name": "pt1", 00:10:42.834 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:42.834 "is_configured": true, 00:10:42.834 "data_offset": 2048, 00:10:42.834 "data_size": 63488 00:10:42.834 }, 00:10:42.834 { 00:10:42.834 "name": "pt2", 00:10:42.834 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:42.834 "is_configured": true, 00:10:42.834 "data_offset": 2048, 00:10:42.834 "data_size": 63488 00:10:42.834 }, 00:10:42.834 { 00:10:42.834 "name": "pt3", 00:10:42.835 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:42.835 "is_configured": true, 00:10:42.835 "data_offset": 2048, 00:10:42.835 
"data_size": 63488 00:10:42.835 }, 00:10:42.835 { 00:10:42.835 "name": "pt4", 00:10:42.835 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:42.835 "is_configured": true, 00:10:42.835 "data_offset": 2048, 00:10:42.835 "data_size": 63488 00:10:42.835 } 00:10:42.835 ] 00:10:42.835 }' 00:10:42.835 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.835 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:43.405 [2024-11-18 03:10:46.735331] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:43.405 "name": "raid_bdev1", 00:10:43.405 "aliases": [ 00:10:43.405 "a66e9a7b-18f1-4ee9-841a-b65f5ea88643" 
00:10:43.405 ], 00:10:43.405 "product_name": "Raid Volume", 00:10:43.405 "block_size": 512, 00:10:43.405 "num_blocks": 253952, 00:10:43.405 "uuid": "a66e9a7b-18f1-4ee9-841a-b65f5ea88643", 00:10:43.405 "assigned_rate_limits": { 00:10:43.405 "rw_ios_per_sec": 0, 00:10:43.405 "rw_mbytes_per_sec": 0, 00:10:43.405 "r_mbytes_per_sec": 0, 00:10:43.405 "w_mbytes_per_sec": 0 00:10:43.405 }, 00:10:43.405 "claimed": false, 00:10:43.405 "zoned": false, 00:10:43.405 "supported_io_types": { 00:10:43.405 "read": true, 00:10:43.405 "write": true, 00:10:43.405 "unmap": true, 00:10:43.405 "flush": true, 00:10:43.405 "reset": true, 00:10:43.405 "nvme_admin": false, 00:10:43.405 "nvme_io": false, 00:10:43.405 "nvme_io_md": false, 00:10:43.405 "write_zeroes": true, 00:10:43.405 "zcopy": false, 00:10:43.405 "get_zone_info": false, 00:10:43.405 "zone_management": false, 00:10:43.405 "zone_append": false, 00:10:43.405 "compare": false, 00:10:43.405 "compare_and_write": false, 00:10:43.405 "abort": false, 00:10:43.405 "seek_hole": false, 00:10:43.405 "seek_data": false, 00:10:43.405 "copy": false, 00:10:43.405 "nvme_iov_md": false 00:10:43.405 }, 00:10:43.405 "memory_domains": [ 00:10:43.405 { 00:10:43.405 "dma_device_id": "system", 00:10:43.405 "dma_device_type": 1 00:10:43.405 }, 00:10:43.405 { 00:10:43.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.405 "dma_device_type": 2 00:10:43.405 }, 00:10:43.405 { 00:10:43.405 "dma_device_id": "system", 00:10:43.405 "dma_device_type": 1 00:10:43.405 }, 00:10:43.405 { 00:10:43.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.405 "dma_device_type": 2 00:10:43.405 }, 00:10:43.405 { 00:10:43.405 "dma_device_id": "system", 00:10:43.405 "dma_device_type": 1 00:10:43.405 }, 00:10:43.405 { 00:10:43.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.405 "dma_device_type": 2 00:10:43.405 }, 00:10:43.405 { 00:10:43.405 "dma_device_id": "system", 00:10:43.405 "dma_device_type": 1 00:10:43.405 }, 00:10:43.405 { 00:10:43.405 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:43.405 "dma_device_type": 2 00:10:43.405 } 00:10:43.405 ], 00:10:43.405 "driver_specific": { 00:10:43.405 "raid": { 00:10:43.405 "uuid": "a66e9a7b-18f1-4ee9-841a-b65f5ea88643", 00:10:43.405 "strip_size_kb": 64, 00:10:43.405 "state": "online", 00:10:43.405 "raid_level": "concat", 00:10:43.405 "superblock": true, 00:10:43.405 "num_base_bdevs": 4, 00:10:43.405 "num_base_bdevs_discovered": 4, 00:10:43.405 "num_base_bdevs_operational": 4, 00:10:43.405 "base_bdevs_list": [ 00:10:43.405 { 00:10:43.405 "name": "pt1", 00:10:43.405 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.405 "is_configured": true, 00:10:43.405 "data_offset": 2048, 00:10:43.405 "data_size": 63488 00:10:43.405 }, 00:10:43.405 { 00:10:43.405 "name": "pt2", 00:10:43.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.405 "is_configured": true, 00:10:43.405 "data_offset": 2048, 00:10:43.405 "data_size": 63488 00:10:43.405 }, 00:10:43.405 { 00:10:43.405 "name": "pt3", 00:10:43.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:43.405 "is_configured": true, 00:10:43.405 "data_offset": 2048, 00:10:43.405 "data_size": 63488 00:10:43.405 }, 00:10:43.405 { 00:10:43.405 "name": "pt4", 00:10:43.405 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:43.405 "is_configured": true, 00:10:43.405 "data_offset": 2048, 00:10:43.405 "data_size": 63488 00:10:43.405 } 00:10:43.405 ] 00:10:43.405 } 00:10:43.405 } 00:10:43.405 }' 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:43.405 pt2 00:10:43.405 pt3 00:10:43.405 pt4' 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.405 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.666 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.666 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.666 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.666 03:10:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:43.666 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.666 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.666 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:43.666 [2024-11-18 03:10:47.106535] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a66e9a7b-18f1-4ee9-841a-b65f5ea88643 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a66e9a7b-18f1-4ee9-841a-b65f5ea88643 ']' 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.666 [2024-11-18 03:10:47.154123] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:43.666 [2024-11-18 03:10:47.154205] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.666 [2024-11-18 03:10:47.154297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.666 [2024-11-18 03:10:47.154402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:43.666 [2024-11-18 03:10:47.154454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.666 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.927 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.927 [2024-11-18 03:10:47.321907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:43.927 [2024-11-18 03:10:47.324049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:43.927 [2024-11-18 03:10:47.324147] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:43.927 [2024-11-18 03:10:47.324214] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:43.927 [2024-11-18 03:10:47.324299] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:43.927 [2024-11-18 03:10:47.324391] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:43.927 [2024-11-18 03:10:47.324460] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:43.927 [2024-11-18 03:10:47.324538] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:43.927 [2024-11-18 03:10:47.324592] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:43.928 [2024-11-18 03:10:47.324645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:10:43.928 request: 00:10:43.928 { 00:10:43.928 "name": "raid_bdev1", 00:10:43.928 "raid_level": "concat", 00:10:43.928 "base_bdevs": [ 00:10:43.928 "malloc1", 00:10:43.928 "malloc2", 00:10:43.928 "malloc3", 00:10:43.928 "malloc4" 00:10:43.928 ], 00:10:43.928 "strip_size_kb": 64, 00:10:43.928 "superblock": false, 00:10:43.928 "method": "bdev_raid_create", 00:10:43.928 "req_id": 1 00:10:43.928 } 00:10:43.928 Got JSON-RPC error response 00:10:43.928 response: 00:10:43.928 { 00:10:43.928 "code": -17, 00:10:43.928 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:43.928 } 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 [2024-11-18 03:10:47.385742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:43.928 [2024-11-18 03:10:47.385837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.928 [2024-11-18 03:10:47.385877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:43.928 [2024-11-18 03:10:47.385904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.928 [2024-11-18 03:10:47.388266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.928 [2024-11-18 03:10:47.388340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:43.928 [2024-11-18 03:10:47.388443] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:43.928 [2024-11-18 03:10:47.388526] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:43.928 pt1 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.928 "name": "raid_bdev1", 00:10:43.928 "uuid": "a66e9a7b-18f1-4ee9-841a-b65f5ea88643", 00:10:43.928 "strip_size_kb": 64, 00:10:43.928 "state": "configuring", 00:10:43.928 "raid_level": "concat", 00:10:43.928 "superblock": true, 00:10:43.928 "num_base_bdevs": 4, 00:10:43.928 "num_base_bdevs_discovered": 1, 00:10:43.928 "num_base_bdevs_operational": 4, 00:10:43.928 "base_bdevs_list": [ 00:10:43.928 { 00:10:43.928 "name": "pt1", 00:10:43.928 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.928 "is_configured": true, 00:10:43.928 "data_offset": 2048, 00:10:43.928 "data_size": 63488 00:10:43.928 }, 00:10:43.928 { 00:10:43.928 "name": null, 00:10:43.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.928 "is_configured": false, 00:10:43.928 "data_offset": 2048, 00:10:43.928 "data_size": 63488 00:10:43.928 }, 00:10:43.928 { 00:10:43.928 "name": null, 00:10:43.928 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:43.928 "is_configured": false, 00:10:43.928 "data_offset": 2048, 00:10:43.928 "data_size": 63488 00:10:43.928 }, 00:10:43.928 { 00:10:43.928 "name": null, 00:10:43.928 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:43.928 "is_configured": false, 00:10:43.928 "data_offset": 2048, 00:10:43.928 "data_size": 63488 00:10:43.928 } 00:10:43.928 ] 00:10:43.928 }' 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.928 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.499 [2024-11-18 03:10:47.805038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:44.499 [2024-11-18 03:10:47.805143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.499 [2024-11-18 03:10:47.805181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:44.499 [2024-11-18 03:10:47.805209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.499 [2024-11-18 03:10:47.805629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.499 [2024-11-18 03:10:47.805685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:44.499 [2024-11-18 03:10:47.805783] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:44.499 [2024-11-18 03:10:47.805832] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:44.499 pt2 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.499 [2024-11-18 03:10:47.817006] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.499 03:10:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.499 "name": "raid_bdev1", 00:10:44.499 "uuid": "a66e9a7b-18f1-4ee9-841a-b65f5ea88643", 00:10:44.499 "strip_size_kb": 64, 00:10:44.499 "state": "configuring", 00:10:44.499 "raid_level": "concat", 00:10:44.499 "superblock": true, 00:10:44.499 "num_base_bdevs": 4, 00:10:44.499 "num_base_bdevs_discovered": 1, 00:10:44.499 "num_base_bdevs_operational": 4, 00:10:44.499 "base_bdevs_list": [ 00:10:44.499 { 00:10:44.499 "name": "pt1", 00:10:44.499 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.499 "is_configured": true, 00:10:44.499 "data_offset": 2048, 00:10:44.499 "data_size": 63488 00:10:44.499 }, 00:10:44.499 { 00:10:44.499 "name": null, 00:10:44.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.499 "is_configured": false, 00:10:44.499 "data_offset": 0, 00:10:44.499 "data_size": 63488 00:10:44.499 }, 00:10:44.499 { 00:10:44.499 "name": null, 00:10:44.499 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.499 "is_configured": false, 00:10:44.499 "data_offset": 2048, 00:10:44.499 "data_size": 63488 00:10:44.499 }, 00:10:44.499 { 00:10:44.499 "name": null, 00:10:44.499 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:44.499 "is_configured": false, 00:10:44.499 "data_offset": 2048, 00:10:44.499 "data_size": 63488 00:10:44.499 } 00:10:44.499 ] 00:10:44.499 }' 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.499 03:10:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:44.759 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:44.759 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:44.759 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:44.759 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.759 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.759 [2024-11-18 03:10:48.316153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:44.759 [2024-11-18 03:10:48.316282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.759 [2024-11-18 03:10:48.316320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:44.759 [2024-11-18 03:10:48.316353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.759 [2024-11-18 03:10:48.316790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.759 [2024-11-18 03:10:48.316854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:44.759 [2024-11-18 03:10:48.316978] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:44.759 [2024-11-18 03:10:48.317037] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:44.759 pt2 00:10:44.759 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.759 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:44.759 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:44.759 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:44.760 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.760 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.760 [2024-11-18 03:10:48.324088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:44.760 [2024-11-18 03:10:48.324183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.760 [2024-11-18 03:10:48.324245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:44.760 [2024-11-18 03:10:48.324274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.760 [2024-11-18 03:10:48.324635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.760 [2024-11-18 03:10:48.324693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:44.760 [2024-11-18 03:10:48.324779] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:44.760 [2024-11-18 03:10:48.324827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:44.760 pt3 00:10:44.760 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.760 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:44.760 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:44.760 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:44.760 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.760 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.760 [2024-11-18 03:10:48.332084] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:44.760 [2024-11-18 03:10:48.332175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.760 [2024-11-18 03:10:48.332210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:44.760 [2024-11-18 03:10:48.332267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.760 [2024-11-18 03:10:48.332587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.760 [2024-11-18 03:10:48.332606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:44.760 [2024-11-18 03:10:48.332660] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:44.760 [2024-11-18 03:10:48.332681] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:44.760 [2024-11-18 03:10:48.332774] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:44.760 [2024-11-18 03:10:48.332787] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:44.760 [2024-11-18 03:10:48.333030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:44.760 [2024-11-18 03:10:48.333145] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:44.760 [2024-11-18 03:10:48.333159] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:44.760 [2024-11-18 03:10:48.333254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.020 pt4 00:10:45.020 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.020 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:45.020 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:45.020 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:45.020 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.020 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.020 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.020 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.020 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.021 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.021 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.021 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.021 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.021 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.021 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.021 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.021 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.021 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.021 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.021 "name": "raid_bdev1", 00:10:45.021 "uuid": "a66e9a7b-18f1-4ee9-841a-b65f5ea88643", 00:10:45.021 "strip_size_kb": 64, 00:10:45.021 "state": "online", 00:10:45.021 "raid_level": "concat", 00:10:45.021 
"superblock": true, 00:10:45.021 "num_base_bdevs": 4, 00:10:45.021 "num_base_bdevs_discovered": 4, 00:10:45.021 "num_base_bdevs_operational": 4, 00:10:45.021 "base_bdevs_list": [ 00:10:45.021 { 00:10:45.021 "name": "pt1", 00:10:45.021 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.021 "is_configured": true, 00:10:45.021 "data_offset": 2048, 00:10:45.021 "data_size": 63488 00:10:45.021 }, 00:10:45.021 { 00:10:45.021 "name": "pt2", 00:10:45.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.021 "is_configured": true, 00:10:45.021 "data_offset": 2048, 00:10:45.021 "data_size": 63488 00:10:45.021 }, 00:10:45.021 { 00:10:45.021 "name": "pt3", 00:10:45.021 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.021 "is_configured": true, 00:10:45.021 "data_offset": 2048, 00:10:45.021 "data_size": 63488 00:10:45.021 }, 00:10:45.021 { 00:10:45.021 "name": "pt4", 00:10:45.021 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:45.021 "is_configured": true, 00:10:45.021 "data_offset": 2048, 00:10:45.021 "data_size": 63488 00:10:45.021 } 00:10:45.021 ] 00:10:45.021 }' 00:10:45.021 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.021 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.281 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:45.281 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:45.281 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:45.281 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:45.281 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:45.281 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:45.281 03:10:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:45.281 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:45.281 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.281 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.281 [2024-11-18 03:10:48.803702] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.282 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.282 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.282 "name": "raid_bdev1", 00:10:45.282 "aliases": [ 00:10:45.282 "a66e9a7b-18f1-4ee9-841a-b65f5ea88643" 00:10:45.282 ], 00:10:45.282 "product_name": "Raid Volume", 00:10:45.282 "block_size": 512, 00:10:45.282 "num_blocks": 253952, 00:10:45.282 "uuid": "a66e9a7b-18f1-4ee9-841a-b65f5ea88643", 00:10:45.282 "assigned_rate_limits": { 00:10:45.282 "rw_ios_per_sec": 0, 00:10:45.282 "rw_mbytes_per_sec": 0, 00:10:45.282 "r_mbytes_per_sec": 0, 00:10:45.282 "w_mbytes_per_sec": 0 00:10:45.282 }, 00:10:45.282 "claimed": false, 00:10:45.282 "zoned": false, 00:10:45.282 "supported_io_types": { 00:10:45.282 "read": true, 00:10:45.282 "write": true, 00:10:45.282 "unmap": true, 00:10:45.282 "flush": true, 00:10:45.282 "reset": true, 00:10:45.282 "nvme_admin": false, 00:10:45.282 "nvme_io": false, 00:10:45.282 "nvme_io_md": false, 00:10:45.282 "write_zeroes": true, 00:10:45.282 "zcopy": false, 00:10:45.282 "get_zone_info": false, 00:10:45.282 "zone_management": false, 00:10:45.282 "zone_append": false, 00:10:45.282 "compare": false, 00:10:45.282 "compare_and_write": false, 00:10:45.282 "abort": false, 00:10:45.282 "seek_hole": false, 00:10:45.282 "seek_data": false, 00:10:45.282 "copy": false, 00:10:45.282 "nvme_iov_md": false 00:10:45.282 }, 00:10:45.282 
"memory_domains": [ 00:10:45.282 { 00:10:45.282 "dma_device_id": "system", 00:10:45.282 "dma_device_type": 1 00:10:45.282 }, 00:10:45.282 { 00:10:45.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.282 "dma_device_type": 2 00:10:45.282 }, 00:10:45.282 { 00:10:45.282 "dma_device_id": "system", 00:10:45.282 "dma_device_type": 1 00:10:45.282 }, 00:10:45.282 { 00:10:45.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.282 "dma_device_type": 2 00:10:45.282 }, 00:10:45.282 { 00:10:45.282 "dma_device_id": "system", 00:10:45.282 "dma_device_type": 1 00:10:45.282 }, 00:10:45.282 { 00:10:45.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.282 "dma_device_type": 2 00:10:45.282 }, 00:10:45.282 { 00:10:45.282 "dma_device_id": "system", 00:10:45.282 "dma_device_type": 1 00:10:45.282 }, 00:10:45.282 { 00:10:45.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.282 "dma_device_type": 2 00:10:45.282 } 00:10:45.282 ], 00:10:45.282 "driver_specific": { 00:10:45.282 "raid": { 00:10:45.282 "uuid": "a66e9a7b-18f1-4ee9-841a-b65f5ea88643", 00:10:45.282 "strip_size_kb": 64, 00:10:45.282 "state": "online", 00:10:45.282 "raid_level": "concat", 00:10:45.282 "superblock": true, 00:10:45.282 "num_base_bdevs": 4, 00:10:45.282 "num_base_bdevs_discovered": 4, 00:10:45.282 "num_base_bdevs_operational": 4, 00:10:45.282 "base_bdevs_list": [ 00:10:45.282 { 00:10:45.282 "name": "pt1", 00:10:45.282 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.282 "is_configured": true, 00:10:45.282 "data_offset": 2048, 00:10:45.282 "data_size": 63488 00:10:45.282 }, 00:10:45.282 { 00:10:45.282 "name": "pt2", 00:10:45.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.282 "is_configured": true, 00:10:45.282 "data_offset": 2048, 00:10:45.282 "data_size": 63488 00:10:45.282 }, 00:10:45.282 { 00:10:45.282 "name": "pt3", 00:10:45.282 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.282 "is_configured": true, 00:10:45.282 "data_offset": 2048, 00:10:45.282 "data_size": 63488 
00:10:45.282 }, 00:10:45.282 { 00:10:45.282 "name": "pt4", 00:10:45.282 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:45.282 "is_configured": true, 00:10:45.282 "data_offset": 2048, 00:10:45.282 "data_size": 63488 00:10:45.282 } 00:10:45.282 ] 00:10:45.282 } 00:10:45.282 } 00:10:45.282 }' 00:10:45.282 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.542 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:45.542 pt2 00:10:45.542 pt3 00:10:45.542 pt4' 00:10:45.542 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.542 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.542 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.542 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:45.542 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.542 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.542 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.542 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.542 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.542 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.542 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.542 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:45.542 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.542 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.542 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.542 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.543 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.543 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.543 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.543 03:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.543 [2024-11-18 03:10:49.103270] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.812 03:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.812 03:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a66e9a7b-18f1-4ee9-841a-b65f5ea88643 '!=' a66e9a7b-18f1-4ee9-841a-b65f5ea88643 ']' 00:10:45.812 03:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:45.812 03:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:45.812 03:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:45.812 03:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83624 00:10:45.812 03:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83624 ']' 00:10:45.812 03:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83624 00:10:45.812 03:10:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:10:45.812 03:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:45.812 03:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83624 00:10:45.812 killing process with pid 83624 00:10:45.812 03:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:45.812 03:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:45.812 03:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83624' 00:10:45.812 03:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83624 00:10:45.812 [2024-11-18 03:10:49.160954] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:45.812 [2024-11-18 03:10:49.161079] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.812 [2024-11-18 03:10:49.161150] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.812 03:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83624 00:10:45.812 [2024-11-18 03:10:49.161162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:45.812 [2024-11-18 03:10:49.206419] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:46.073 03:10:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:46.073 00:10:46.073 real 0m4.258s 00:10:46.073 user 0m6.708s 00:10:46.073 sys 0m0.900s 00:10:46.073 03:10:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.073 ************************************ 00:10:46.073 END TEST raid_superblock_test 00:10:46.073 ************************************ 00:10:46.073 03:10:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.073 03:10:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:46.073 03:10:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:46.073 03:10:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.073 03:10:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:46.073 ************************************ 00:10:46.073 START TEST raid_read_error_test 00:10:46.073 ************************************ 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:46.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pHnrNOkvCz 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83873 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83873 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 83873 ']' 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.073 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:46.073 [2024-11-18 03:10:49.620369] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:46.073 [2024-11-18 03:10:49.620590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83873 ] 00:10:46.332 [2024-11-18 03:10:49.780898] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.333 [2024-11-18 03:10:49.831433] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.333 [2024-11-18 03:10:49.874274] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.333 [2024-11-18 03:10:49.874314] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.902 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:46.902 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:46.902 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:46.902 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:46.902 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.902 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.902 BaseBdev1_malloc 00:10:46.902 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.902 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:46.902 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.902 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.161 true 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.161 [2024-11-18 03:10:50.492837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:47.161 [2024-11-18 03:10:50.492946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.161 [2024-11-18 03:10:50.493045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:47.161 [2024-11-18 03:10:50.493085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.161 [2024-11-18 03:10:50.495247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.161 [2024-11-18 03:10:50.495336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:47.161 BaseBdev1 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.161 BaseBdev2_malloc 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.161 true 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.161 [2024-11-18 03:10:50.541327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:47.161 [2024-11-18 03:10:50.541423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.161 [2024-11-18 03:10:50.541462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:47.161 [2024-11-18 03:10:50.541471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.161 [2024-11-18 03:10:50.543518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.161 [2024-11-18 03:10:50.543556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:47.161 BaseBdev2 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.161 BaseBdev3_malloc 00:10:47.161 03:10:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.161 true 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.161 [2024-11-18 03:10:50.581940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:47.161 [2024-11-18 03:10:50.582000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.161 [2024-11-18 03:10:50.582021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:47.161 [2024-11-18 03:10:50.582031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.161 [2024-11-18 03:10:50.584329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.161 [2024-11-18 03:10:50.584415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:47.161 BaseBdev3 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.161 BaseBdev4_malloc 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.161 true 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.161 [2024-11-18 03:10:50.622836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:47.161 [2024-11-18 03:10:50.622940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.161 [2024-11-18 03:10:50.623001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:47.161 [2024-11-18 03:10:50.623013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.161 [2024-11-18 03:10:50.625327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.161 [2024-11-18 03:10:50.625369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:47.161 BaseBdev4 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:47.161 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.162 [2024-11-18 03:10:50.634873] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.162 [2024-11-18 03:10:50.636850] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.162 [2024-11-18 03:10:50.636952] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.162 [2024-11-18 03:10:50.637030] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:47.162 [2024-11-18 03:10:50.637265] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:47.162 [2024-11-18 03:10:50.637284] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:47.162 [2024-11-18 03:10:50.637589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:47.162 [2024-11-18 03:10:50.637747] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:47.162 [2024-11-18 03:10:50.637761] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:47.162 [2024-11-18 03:10:50.637911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:47.162 03:10:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.162 "name": "raid_bdev1", 00:10:47.162 "uuid": "92b1b047-8c25-42a4-ad7d-3d188141ce40", 00:10:47.162 "strip_size_kb": 64, 00:10:47.162 "state": "online", 00:10:47.162 "raid_level": "concat", 00:10:47.162 "superblock": true, 00:10:47.162 "num_base_bdevs": 4, 00:10:47.162 "num_base_bdevs_discovered": 4, 00:10:47.162 "num_base_bdevs_operational": 4, 00:10:47.162 "base_bdevs_list": [ 
00:10:47.162 { 00:10:47.162 "name": "BaseBdev1", 00:10:47.162 "uuid": "d8eed558-6e20-566a-a989-bff72868c06d", 00:10:47.162 "is_configured": true, 00:10:47.162 "data_offset": 2048, 00:10:47.162 "data_size": 63488 00:10:47.162 }, 00:10:47.162 { 00:10:47.162 "name": "BaseBdev2", 00:10:47.162 "uuid": "91fbc26c-6576-5309-8759-431ecaadaebd", 00:10:47.162 "is_configured": true, 00:10:47.162 "data_offset": 2048, 00:10:47.162 "data_size": 63488 00:10:47.162 }, 00:10:47.162 { 00:10:47.162 "name": "BaseBdev3", 00:10:47.162 "uuid": "6382f7b7-df34-57b0-9265-e86b4437ea7f", 00:10:47.162 "is_configured": true, 00:10:47.162 "data_offset": 2048, 00:10:47.162 "data_size": 63488 00:10:47.162 }, 00:10:47.162 { 00:10:47.162 "name": "BaseBdev4", 00:10:47.162 "uuid": "6906d780-e4ee-5b60-8fda-d4fa3e03958f", 00:10:47.162 "is_configured": true, 00:10:47.162 "data_offset": 2048, 00:10:47.162 "data_size": 63488 00:10:47.162 } 00:10:47.162 ] 00:10:47.162 }' 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.162 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.732 03:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:47.732 03:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:47.732 [2024-11-18 03:10:51.150333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.672 03:10:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.672 03:10:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.672 "name": "raid_bdev1", 00:10:48.672 "uuid": "92b1b047-8c25-42a4-ad7d-3d188141ce40", 00:10:48.672 "strip_size_kb": 64, 00:10:48.672 "state": "online", 00:10:48.672 "raid_level": "concat", 00:10:48.672 "superblock": true, 00:10:48.672 "num_base_bdevs": 4, 00:10:48.672 "num_base_bdevs_discovered": 4, 00:10:48.672 "num_base_bdevs_operational": 4, 00:10:48.672 "base_bdevs_list": [ 00:10:48.672 { 00:10:48.672 "name": "BaseBdev1", 00:10:48.672 "uuid": "d8eed558-6e20-566a-a989-bff72868c06d", 00:10:48.672 "is_configured": true, 00:10:48.672 "data_offset": 2048, 00:10:48.672 "data_size": 63488 00:10:48.672 }, 00:10:48.672 { 00:10:48.672 "name": "BaseBdev2", 00:10:48.672 "uuid": "91fbc26c-6576-5309-8759-431ecaadaebd", 00:10:48.672 "is_configured": true, 00:10:48.672 "data_offset": 2048, 00:10:48.672 "data_size": 63488 00:10:48.672 }, 00:10:48.672 { 00:10:48.672 "name": "BaseBdev3", 00:10:48.672 "uuid": "6382f7b7-df34-57b0-9265-e86b4437ea7f", 00:10:48.672 "is_configured": true, 00:10:48.672 "data_offset": 2048, 00:10:48.672 "data_size": 63488 00:10:48.672 }, 00:10:48.672 { 00:10:48.672 "name": "BaseBdev4", 00:10:48.672 "uuid": "6906d780-e4ee-5b60-8fda-d4fa3e03958f", 00:10:48.672 "is_configured": true, 00:10:48.672 "data_offset": 2048, 00:10:48.672 "data_size": 63488 00:10:48.672 } 00:10:48.672 ] 00:10:48.672 }' 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.672 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.241 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:49.241 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.241 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.241 [2024-11-18 03:10:52.574824] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:49.241 [2024-11-18 03:10:52.574929] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.241 [2024-11-18 03:10:52.577515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.241 [2024-11-18 03:10:52.577611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.241 [2024-11-18 03:10:52.577678] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.241 [2024-11-18 03:10:52.577731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:49.241 { 00:10:49.241 "results": [ 00:10:49.242 { 00:10:49.242 "job": "raid_bdev1", 00:10:49.242 "core_mask": "0x1", 00:10:49.242 "workload": "randrw", 00:10:49.242 "percentage": 50, 00:10:49.242 "status": "finished", 00:10:49.242 "queue_depth": 1, 00:10:49.242 "io_size": 131072, 00:10:49.242 "runtime": 1.425499, 00:10:49.242 "iops": 16073.669641297538, 00:10:49.242 "mibps": 2009.2087051621922, 00:10:49.242 "io_failed": 1, 00:10:49.242 "io_timeout": 0, 00:10:49.242 "avg_latency_us": 86.42650716386656, 00:10:49.242 "min_latency_us": 26.717903930131005, 00:10:49.242 "max_latency_us": 2532.7231441048034 00:10:49.242 } 00:10:49.242 ], 00:10:49.242 "core_count": 1 00:10:49.242 } 00:10:49.242 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.242 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83873 00:10:49.242 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 83873 ']' 00:10:49.242 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 83873 00:10:49.242 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:49.242 03:10:52 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:49.242 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83873 00:10:49.242 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:49.242 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:49.242 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83873' 00:10:49.242 killing process with pid 83873 00:10:49.242 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 83873 00:10:49.242 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 83873 00:10:49.242 [2024-11-18 03:10:52.616276] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.242 [2024-11-18 03:10:52.652811] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:49.502 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pHnrNOkvCz 00:10:49.502 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:49.502 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:49.502 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:49.502 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:49.502 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:49.502 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:49.502 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:49.502 00:10:49.502 real 0m3.371s 00:10:49.502 user 0m4.279s 00:10:49.502 sys 0m0.521s 00:10:49.502 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:49.502 ************************************ 00:10:49.502 END TEST raid_read_error_test 00:10:49.502 ************************************ 00:10:49.502 03:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.502 03:10:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:49.502 03:10:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:49.502 03:10:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:49.502 03:10:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:49.502 ************************************ 00:10:49.502 START TEST raid_write_error_test 00:10:49.502 ************************************ 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2FdTo6Uudd 00:10:49.502 03:10:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=84003 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 84003 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 84003 ']' 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:49.502 03:10:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.502 [2024-11-18 03:10:53.049494] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:49.502 [2024-11-18 03:10:53.049640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84003 ] 00:10:49.762 [2024-11-18 03:10:53.210750] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.762 [2024-11-18 03:10:53.261827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.762 [2024-11-18 03:10:53.304539] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.762 [2024-11-18 03:10:53.304581] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.700 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:50.700 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:50.700 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:50.700 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:50.700 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.700 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.700 BaseBdev1_malloc 00:10:50.700 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.700 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:50.700 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.700 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.700 true 00:10:50.700 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:50.700 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:50.700 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.700 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.700 [2024-11-18 03:10:53.959176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:50.700 [2024-11-18 03:10:53.959234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.701 [2024-11-18 03:10:53.959282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:50.701 [2024-11-18 03:10:53.959291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.701 [2024-11-18 03:10:53.961496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.701 [2024-11-18 03:10:53.961579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:50.701 BaseBdev1 00:10:50.701 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.701 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:50.701 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:50.701 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.701 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.701 BaseBdev2_malloc 00:10:50.701 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.701 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:50.701 03:10:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.701 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.701 true 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.701 [2024-11-18 03:10:54.008994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:50.701 [2024-11-18 03:10:54.009047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.701 [2024-11-18 03:10:54.009082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:50.701 [2024-11-18 03:10:54.009091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.701 [2024-11-18 03:10:54.011213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.701 [2024-11-18 03:10:54.011251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:50.701 BaseBdev2 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:50.701 BaseBdev3_malloc 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.701 true 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.701 [2024-11-18 03:10:54.049782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:50.701 [2024-11-18 03:10:54.049843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.701 [2024-11-18 03:10:54.049865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:50.701 [2024-11-18 03:10:54.049874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.701 [2024-11-18 03:10:54.052015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.701 [2024-11-18 03:10:54.052054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:50.701 BaseBdev3 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.701 BaseBdev4_malloc 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.701 true 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.701 [2024-11-18 03:10:54.090664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:50.701 [2024-11-18 03:10:54.090717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.701 [2024-11-18 03:10:54.090740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:50.701 [2024-11-18 03:10:54.090750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.701 [2024-11-18 03:10:54.092847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.701 [2024-11-18 03:10:54.092886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:50.701 BaseBdev4 
00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.701 [2024-11-18 03:10:54.102718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.701 [2024-11-18 03:10:54.104635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.701 [2024-11-18 03:10:54.104722] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:50.701 [2024-11-18 03:10:54.104776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:50.701 [2024-11-18 03:10:54.104992] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:50.701 [2024-11-18 03:10:54.105021] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:50.701 [2024-11-18 03:10:54.105280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:50.701 [2024-11-18 03:10:54.105424] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:50.701 [2024-11-18 03:10:54.105442] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:50.701 [2024-11-18 03:10:54.105598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.701 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.701 "name": "raid_bdev1", 00:10:50.701 "uuid": "7b8b7cd5-e472-4d8b-8366-b8c5b8ed10e2", 00:10:50.701 "strip_size_kb": 64, 00:10:50.701 "state": "online", 00:10:50.701 "raid_level": "concat", 00:10:50.701 "superblock": true, 00:10:50.701 "num_base_bdevs": 4, 00:10:50.701 "num_base_bdevs_discovered": 4, 00:10:50.701 
"num_base_bdevs_operational": 4, 00:10:50.701 "base_bdevs_list": [ 00:10:50.701 { 00:10:50.701 "name": "BaseBdev1", 00:10:50.701 "uuid": "e139ee5f-57c6-5964-b541-3e04f39bfc33", 00:10:50.701 "is_configured": true, 00:10:50.701 "data_offset": 2048, 00:10:50.701 "data_size": 63488 00:10:50.701 }, 00:10:50.701 { 00:10:50.701 "name": "BaseBdev2", 00:10:50.701 "uuid": "514acf4e-39d9-5227-a5a5-1959d1e629ed", 00:10:50.701 "is_configured": true, 00:10:50.701 "data_offset": 2048, 00:10:50.701 "data_size": 63488 00:10:50.701 }, 00:10:50.701 { 00:10:50.702 "name": "BaseBdev3", 00:10:50.702 "uuid": "0ec562a5-dada-5271-a629-3d5734203a48", 00:10:50.702 "is_configured": true, 00:10:50.702 "data_offset": 2048, 00:10:50.702 "data_size": 63488 00:10:50.702 }, 00:10:50.702 { 00:10:50.702 "name": "BaseBdev4", 00:10:50.702 "uuid": "8d188ed2-f283-5a95-befe-7d433f2bcf73", 00:10:50.702 "is_configured": true, 00:10:50.702 "data_offset": 2048, 00:10:50.702 "data_size": 63488 00:10:50.702 } 00:10:50.702 ] 00:10:50.702 }' 00:10:50.702 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.702 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.307 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:51.307 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:51.307 [2024-11-18 03:10:54.654161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.245 03:10:55 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.245 "name": "raid_bdev1", 00:10:52.245 "uuid": "7b8b7cd5-e472-4d8b-8366-b8c5b8ed10e2", 00:10:52.245 "strip_size_kb": 64, 00:10:52.245 "state": "online", 00:10:52.245 "raid_level": "concat", 00:10:52.245 "superblock": true, 00:10:52.245 "num_base_bdevs": 4, 00:10:52.245 "num_base_bdevs_discovered": 4, 00:10:52.245 "num_base_bdevs_operational": 4, 00:10:52.245 "base_bdevs_list": [ 00:10:52.245 { 00:10:52.245 "name": "BaseBdev1", 00:10:52.245 "uuid": "e139ee5f-57c6-5964-b541-3e04f39bfc33", 00:10:52.245 "is_configured": true, 00:10:52.245 "data_offset": 2048, 00:10:52.245 "data_size": 63488 00:10:52.245 }, 00:10:52.245 { 00:10:52.245 "name": "BaseBdev2", 00:10:52.245 "uuid": "514acf4e-39d9-5227-a5a5-1959d1e629ed", 00:10:52.245 "is_configured": true, 00:10:52.245 "data_offset": 2048, 00:10:52.245 "data_size": 63488 00:10:52.245 }, 00:10:52.245 { 00:10:52.245 "name": "BaseBdev3", 00:10:52.245 "uuid": "0ec562a5-dada-5271-a629-3d5734203a48", 00:10:52.245 "is_configured": true, 00:10:52.245 "data_offset": 2048, 00:10:52.245 "data_size": 63488 00:10:52.245 }, 00:10:52.245 { 00:10:52.245 "name": "BaseBdev4", 00:10:52.245 "uuid": "8d188ed2-f283-5a95-befe-7d433f2bcf73", 00:10:52.245 "is_configured": true, 00:10:52.245 "data_offset": 2048, 00:10:52.245 "data_size": 63488 00:10:52.245 } 00:10:52.245 ] 00:10:52.245 }' 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.245 03:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.505 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:52.506 03:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.506 03:10:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:52.506 [2024-11-18 03:10:55.985695] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.506 [2024-11-18 03:10:55.985727] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.506 { 00:10:52.506 "results": [ 00:10:52.506 { 00:10:52.506 "job": "raid_bdev1", 00:10:52.506 "core_mask": "0x1", 00:10:52.506 "workload": "randrw", 00:10:52.506 "percentage": 50, 00:10:52.506 "status": "finished", 00:10:52.506 "queue_depth": 1, 00:10:52.506 "io_size": 131072, 00:10:52.506 "runtime": 1.332126, 00:10:52.506 "iops": 15854.356119466176, 00:10:52.506 "mibps": 1981.794514933272, 00:10:52.506 "io_failed": 1, 00:10:52.506 "io_timeout": 0, 00:10:52.506 "avg_latency_us": 87.52964943725166, 00:10:52.506 "min_latency_us": 26.606113537117903, 00:10:52.506 "max_latency_us": 1452.380786026201 00:10:52.506 } 00:10:52.506 ], 00:10:52.506 "core_count": 1 00:10:52.506 } 00:10:52.506 [2024-11-18 03:10:55.988302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.506 [2024-11-18 03:10:55.988355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.506 [2024-11-18 03:10:55.988402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.506 [2024-11-18 03:10:55.988412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:52.506 03:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.506 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 84003 00:10:52.506 03:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 84003 ']' 00:10:52.506 03:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 84003 00:10:52.506 03:10:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:52.506 03:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:52.506 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84003 00:10:52.506 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:52.506 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:52.506 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84003' 00:10:52.506 killing process with pid 84003 00:10:52.506 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 84003 00:10:52.506 [2024-11-18 03:10:56.041592] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:52.506 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 84003 00:10:52.506 [2024-11-18 03:10:56.078824] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.765 03:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:52.765 03:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2FdTo6Uudd 00:10:52.765 03:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:52.765 03:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:10:52.766 03:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:52.766 ************************************ 00:10:52.766 END TEST raid_write_error_test 00:10:52.766 ************************************ 00:10:52.766 03:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:52.766 03:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:52.766 03:10:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:10:52.766 00:10:52.766 real 0m3.376s 00:10:52.766 user 0m4.252s 00:10:52.766 sys 0m0.553s 00:10:52.766 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.766 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.026 03:10:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:53.026 03:10:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:53.026 03:10:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:53.026 03:10:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.026 03:10:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.026 ************************************ 00:10:53.026 START TEST raid_state_function_test 00:10:53.026 ************************************ 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:53.026 03:10:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84130 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84130' 00:10:53.026 Process raid pid: 84130 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84130 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 84130 ']' 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.026 03:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.026 [2024-11-18 03:10:56.496452] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:53.026 [2024-11-18 03:10:56.496602] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.286 [2024-11-18 03:10:56.659446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.286 [2024-11-18 03:10:56.710083] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.286 [2024-11-18 03:10:56.752738] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.286 [2024-11-18 03:10:56.752775] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.856 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:53.856 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:53.856 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.856 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.857 [2024-11-18 03:10:57.338397] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.857 [2024-11-18 03:10:57.338461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.857 [2024-11-18 03:10:57.338474] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.857 [2024-11-18 03:10:57.338484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.857 [2024-11-18 03:10:57.338493] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:53.857 [2024-11-18 03:10:57.338506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.857 [2024-11-18 03:10:57.338512] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:53.857 [2024-11-18 03:10:57.338521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.857 "name": "Existed_Raid", 00:10:53.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.857 "strip_size_kb": 0, 00:10:53.857 "state": "configuring", 00:10:53.857 "raid_level": "raid1", 00:10:53.857 "superblock": false, 00:10:53.857 "num_base_bdevs": 4, 00:10:53.857 "num_base_bdevs_discovered": 0, 00:10:53.857 "num_base_bdevs_operational": 4, 00:10:53.857 "base_bdevs_list": [ 00:10:53.857 { 00:10:53.857 "name": "BaseBdev1", 00:10:53.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.857 "is_configured": false, 00:10:53.857 "data_offset": 0, 00:10:53.857 "data_size": 0 00:10:53.857 }, 00:10:53.857 { 00:10:53.857 "name": "BaseBdev2", 00:10:53.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.857 "is_configured": false, 00:10:53.857 "data_offset": 0, 00:10:53.857 "data_size": 0 00:10:53.857 }, 00:10:53.857 { 00:10:53.857 "name": "BaseBdev3", 00:10:53.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.857 "is_configured": false, 00:10:53.857 "data_offset": 0, 00:10:53.857 "data_size": 0 00:10:53.857 }, 00:10:53.857 { 00:10:53.857 "name": "BaseBdev4", 00:10:53.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.857 "is_configured": false, 00:10:53.857 "data_offset": 0, 00:10:53.857 "data_size": 0 00:10:53.857 } 00:10:53.857 ] 00:10:53.857 }' 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.857 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.427 [2024-11-18 03:10:57.761589] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:54.427 [2024-11-18 03:10:57.761690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.427 [2024-11-18 03:10:57.769609] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.427 [2024-11-18 03:10:57.769690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.427 [2024-11-18 03:10:57.769718] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.427 [2024-11-18 03:10:57.769741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.427 [2024-11-18 03:10:57.769760] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:54.427 [2024-11-18 03:10:57.769781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:54.427 [2024-11-18 03:10:57.769799] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:54.427 [2024-11-18 03:10:57.769820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.427 [2024-11-18 03:10:57.786634] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.427 BaseBdev1 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.427 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.428 [ 00:10:54.428 { 00:10:54.428 "name": "BaseBdev1", 00:10:54.428 "aliases": [ 00:10:54.428 "ec99d276-4d53-4633-be3c-06a7b9175598" 00:10:54.428 ], 00:10:54.428 "product_name": "Malloc disk", 00:10:54.428 "block_size": 512, 00:10:54.428 "num_blocks": 65536, 00:10:54.428 "uuid": "ec99d276-4d53-4633-be3c-06a7b9175598", 00:10:54.428 "assigned_rate_limits": { 00:10:54.428 "rw_ios_per_sec": 0, 00:10:54.428 "rw_mbytes_per_sec": 0, 00:10:54.428 "r_mbytes_per_sec": 0, 00:10:54.428 "w_mbytes_per_sec": 0 00:10:54.428 }, 00:10:54.428 "claimed": true, 00:10:54.428 "claim_type": "exclusive_write", 00:10:54.428 "zoned": false, 00:10:54.428 "supported_io_types": { 00:10:54.428 "read": true, 00:10:54.428 "write": true, 00:10:54.428 "unmap": true, 00:10:54.428 "flush": true, 00:10:54.428 "reset": true, 00:10:54.428 "nvme_admin": false, 00:10:54.428 "nvme_io": false, 00:10:54.428 "nvme_io_md": false, 00:10:54.428 "write_zeroes": true, 00:10:54.428 "zcopy": true, 00:10:54.428 "get_zone_info": false, 00:10:54.428 "zone_management": false, 00:10:54.428 "zone_append": false, 00:10:54.428 "compare": false, 00:10:54.428 "compare_and_write": false, 00:10:54.428 "abort": true, 00:10:54.428 "seek_hole": false, 00:10:54.428 "seek_data": false, 00:10:54.428 "copy": true, 00:10:54.428 "nvme_iov_md": false 00:10:54.428 }, 00:10:54.428 "memory_domains": [ 00:10:54.428 { 00:10:54.428 "dma_device_id": "system", 00:10:54.428 "dma_device_type": 1 00:10:54.428 }, 00:10:54.428 { 00:10:54.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.428 "dma_device_type": 2 00:10:54.428 } 00:10:54.428 ], 00:10:54.428 "driver_specific": {} 00:10:54.428 } 00:10:54.428 ] 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.428 "name": "Existed_Raid", 00:10:54.428 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:54.428 "strip_size_kb": 0, 00:10:54.428 "state": "configuring", 00:10:54.428 "raid_level": "raid1", 00:10:54.428 "superblock": false, 00:10:54.428 "num_base_bdevs": 4, 00:10:54.428 "num_base_bdevs_discovered": 1, 00:10:54.428 "num_base_bdevs_operational": 4, 00:10:54.428 "base_bdevs_list": [ 00:10:54.428 { 00:10:54.428 "name": "BaseBdev1", 00:10:54.428 "uuid": "ec99d276-4d53-4633-be3c-06a7b9175598", 00:10:54.428 "is_configured": true, 00:10:54.428 "data_offset": 0, 00:10:54.428 "data_size": 65536 00:10:54.428 }, 00:10:54.428 { 00:10:54.428 "name": "BaseBdev2", 00:10:54.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.428 "is_configured": false, 00:10:54.428 "data_offset": 0, 00:10:54.428 "data_size": 0 00:10:54.428 }, 00:10:54.428 { 00:10:54.428 "name": "BaseBdev3", 00:10:54.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.428 "is_configured": false, 00:10:54.428 "data_offset": 0, 00:10:54.428 "data_size": 0 00:10:54.428 }, 00:10:54.428 { 00:10:54.428 "name": "BaseBdev4", 00:10:54.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.428 "is_configured": false, 00:10:54.428 "data_offset": 0, 00:10:54.428 "data_size": 0 00:10:54.428 } 00:10:54.428 ] 00:10:54.428 }' 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.428 03:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.688 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:54.688 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.688 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.688 [2024-11-18 03:10:58.249921] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:54.688 [2024-11-18 03:10:58.250054] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:54.688 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.688 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:54.688 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.688 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.688 [2024-11-18 03:10:58.261942] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.948 [2024-11-18 03:10:58.264134] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.948 [2024-11-18 03:10:58.264184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.948 [2024-11-18 03:10:58.264196] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:54.948 [2024-11-18 03:10:58.264207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:54.948 [2024-11-18 03:10:58.264214] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:54.948 [2024-11-18 03:10:58.264234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:54.948 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.948 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:54.948 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.948 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:54.948 03:10:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.948 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.948 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.948 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.948 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.948 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.948 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.948 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.948 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.948 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.948 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.949 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.949 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.949 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.949 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.949 "name": "Existed_Raid", 00:10:54.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.949 "strip_size_kb": 0, 00:10:54.949 "state": "configuring", 00:10:54.949 "raid_level": "raid1", 00:10:54.949 "superblock": false, 00:10:54.949 "num_base_bdevs": 4, 00:10:54.949 "num_base_bdevs_discovered": 1, 00:10:54.949 
"num_base_bdevs_operational": 4, 00:10:54.949 "base_bdevs_list": [ 00:10:54.949 { 00:10:54.949 "name": "BaseBdev1", 00:10:54.949 "uuid": "ec99d276-4d53-4633-be3c-06a7b9175598", 00:10:54.949 "is_configured": true, 00:10:54.949 "data_offset": 0, 00:10:54.949 "data_size": 65536 00:10:54.949 }, 00:10:54.949 { 00:10:54.949 "name": "BaseBdev2", 00:10:54.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.949 "is_configured": false, 00:10:54.949 "data_offset": 0, 00:10:54.949 "data_size": 0 00:10:54.949 }, 00:10:54.949 { 00:10:54.949 "name": "BaseBdev3", 00:10:54.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.949 "is_configured": false, 00:10:54.949 "data_offset": 0, 00:10:54.949 "data_size": 0 00:10:54.949 }, 00:10:54.949 { 00:10:54.949 "name": "BaseBdev4", 00:10:54.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.949 "is_configured": false, 00:10:54.949 "data_offset": 0, 00:10:54.949 "data_size": 0 00:10:54.949 } 00:10:54.949 ] 00:10:54.949 }' 00:10:54.949 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.949 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.208 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:55.208 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.208 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.208 [2024-11-18 03:10:58.745708] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.208 BaseBdev2 00:10:55.208 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.208 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:55.208 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev2 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.209 [ 00:10:55.209 { 00:10:55.209 "name": "BaseBdev2", 00:10:55.209 "aliases": [ 00:10:55.209 "8c647b2b-e161-440a-873c-32c0c73c7456" 00:10:55.209 ], 00:10:55.209 "product_name": "Malloc disk", 00:10:55.209 "block_size": 512, 00:10:55.209 "num_blocks": 65536, 00:10:55.209 "uuid": "8c647b2b-e161-440a-873c-32c0c73c7456", 00:10:55.209 "assigned_rate_limits": { 00:10:55.209 "rw_ios_per_sec": 0, 00:10:55.209 "rw_mbytes_per_sec": 0, 00:10:55.209 "r_mbytes_per_sec": 0, 00:10:55.209 "w_mbytes_per_sec": 0 00:10:55.209 }, 00:10:55.209 "claimed": true, 00:10:55.209 "claim_type": "exclusive_write", 00:10:55.209 "zoned": false, 00:10:55.209 "supported_io_types": { 00:10:55.209 "read": true, 00:10:55.209 "write": true, 00:10:55.209 
"unmap": true, 00:10:55.209 "flush": true, 00:10:55.209 "reset": true, 00:10:55.209 "nvme_admin": false, 00:10:55.209 "nvme_io": false, 00:10:55.209 "nvme_io_md": false, 00:10:55.209 "write_zeroes": true, 00:10:55.209 "zcopy": true, 00:10:55.209 "get_zone_info": false, 00:10:55.209 "zone_management": false, 00:10:55.209 "zone_append": false, 00:10:55.209 "compare": false, 00:10:55.209 "compare_and_write": false, 00:10:55.209 "abort": true, 00:10:55.209 "seek_hole": false, 00:10:55.209 "seek_data": false, 00:10:55.209 "copy": true, 00:10:55.209 "nvme_iov_md": false 00:10:55.209 }, 00:10:55.209 "memory_domains": [ 00:10:55.209 { 00:10:55.209 "dma_device_id": "system", 00:10:55.209 "dma_device_type": 1 00:10:55.209 }, 00:10:55.209 { 00:10:55.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.209 "dma_device_type": 2 00:10:55.209 } 00:10:55.209 ], 00:10:55.209 "driver_specific": {} 00:10:55.209 } 00:10:55.209 ] 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.209 03:10:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.209 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.469 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.469 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.469 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.469 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.469 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.469 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.469 "name": "Existed_Raid", 00:10:55.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.469 "strip_size_kb": 0, 00:10:55.469 "state": "configuring", 00:10:55.469 "raid_level": "raid1", 00:10:55.469 "superblock": false, 00:10:55.469 "num_base_bdevs": 4, 00:10:55.469 "num_base_bdevs_discovered": 2, 00:10:55.469 "num_base_bdevs_operational": 4, 00:10:55.469 "base_bdevs_list": [ 00:10:55.469 { 00:10:55.469 "name": "BaseBdev1", 00:10:55.469 "uuid": "ec99d276-4d53-4633-be3c-06a7b9175598", 00:10:55.469 "is_configured": true, 00:10:55.469 "data_offset": 0, 00:10:55.469 "data_size": 65536 00:10:55.469 }, 00:10:55.469 { 00:10:55.469 "name": "BaseBdev2", 00:10:55.469 "uuid": "8c647b2b-e161-440a-873c-32c0c73c7456", 00:10:55.469 "is_configured": true, 00:10:55.469 
"data_offset": 0, 00:10:55.469 "data_size": 65536 00:10:55.469 }, 00:10:55.469 { 00:10:55.469 "name": "BaseBdev3", 00:10:55.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.469 "is_configured": false, 00:10:55.469 "data_offset": 0, 00:10:55.469 "data_size": 0 00:10:55.469 }, 00:10:55.469 { 00:10:55.469 "name": "BaseBdev4", 00:10:55.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.469 "is_configured": false, 00:10:55.469 "data_offset": 0, 00:10:55.469 "data_size": 0 00:10:55.469 } 00:10:55.469 ] 00:10:55.469 }' 00:10:55.469 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.469 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.729 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:55.729 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.729 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.729 [2024-11-18 03:10:59.172172] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.729 BaseBdev3 00:10:55.729 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.729 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:55.729 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:55.729 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:55.729 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:55.729 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:55.729 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:55.729 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:55.729 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.729 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.729 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.729 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:55.729 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.729 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.729 [ 00:10:55.729 { 00:10:55.729 "name": "BaseBdev3", 00:10:55.729 "aliases": [ 00:10:55.729 "2632d3c4-a850-4752-92fa-8dc6f3114378" 00:10:55.729 ], 00:10:55.729 "product_name": "Malloc disk", 00:10:55.729 "block_size": 512, 00:10:55.729 "num_blocks": 65536, 00:10:55.729 "uuid": "2632d3c4-a850-4752-92fa-8dc6f3114378", 00:10:55.729 "assigned_rate_limits": { 00:10:55.729 "rw_ios_per_sec": 0, 00:10:55.729 "rw_mbytes_per_sec": 0, 00:10:55.729 "r_mbytes_per_sec": 0, 00:10:55.729 "w_mbytes_per_sec": 0 00:10:55.730 }, 00:10:55.730 "claimed": true, 00:10:55.730 "claim_type": "exclusive_write", 00:10:55.730 "zoned": false, 00:10:55.730 "supported_io_types": { 00:10:55.730 "read": true, 00:10:55.730 "write": true, 00:10:55.730 "unmap": true, 00:10:55.730 "flush": true, 00:10:55.730 "reset": true, 00:10:55.730 "nvme_admin": false, 00:10:55.730 "nvme_io": false, 00:10:55.730 "nvme_io_md": false, 00:10:55.730 "write_zeroes": true, 00:10:55.730 "zcopy": true, 00:10:55.730 "get_zone_info": false, 00:10:55.730 "zone_management": false, 00:10:55.730 "zone_append": false, 00:10:55.730 "compare": false, 00:10:55.730 "compare_and_write": false, 00:10:55.730 "abort": true, 
00:10:55.730 "seek_hole": false, 00:10:55.730 "seek_data": false, 00:10:55.730 "copy": true, 00:10:55.730 "nvme_iov_md": false 00:10:55.730 }, 00:10:55.730 "memory_domains": [ 00:10:55.730 { 00:10:55.730 "dma_device_id": "system", 00:10:55.730 "dma_device_type": 1 00:10:55.730 }, 00:10:55.730 { 00:10:55.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.730 "dma_device_type": 2 00:10:55.730 } 00:10:55.730 ], 00:10:55.730 "driver_specific": {} 00:10:55.730 } 00:10:55.730 ] 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.730 03:10:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.730 "name": "Existed_Raid", 00:10:55.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.730 "strip_size_kb": 0, 00:10:55.730 "state": "configuring", 00:10:55.730 "raid_level": "raid1", 00:10:55.730 "superblock": false, 00:10:55.730 "num_base_bdevs": 4, 00:10:55.730 "num_base_bdevs_discovered": 3, 00:10:55.730 "num_base_bdevs_operational": 4, 00:10:55.730 "base_bdevs_list": [ 00:10:55.730 { 00:10:55.730 "name": "BaseBdev1", 00:10:55.730 "uuid": "ec99d276-4d53-4633-be3c-06a7b9175598", 00:10:55.730 "is_configured": true, 00:10:55.730 "data_offset": 0, 00:10:55.730 "data_size": 65536 00:10:55.730 }, 00:10:55.730 { 00:10:55.730 "name": "BaseBdev2", 00:10:55.730 "uuid": "8c647b2b-e161-440a-873c-32c0c73c7456", 00:10:55.730 "is_configured": true, 00:10:55.730 "data_offset": 0, 00:10:55.730 "data_size": 65536 00:10:55.730 }, 00:10:55.730 { 00:10:55.730 "name": "BaseBdev3", 00:10:55.730 "uuid": "2632d3c4-a850-4752-92fa-8dc6f3114378", 00:10:55.730 "is_configured": true, 00:10:55.730 "data_offset": 0, 00:10:55.730 "data_size": 65536 00:10:55.730 }, 00:10:55.730 { 00:10:55.730 "name": "BaseBdev4", 00:10:55.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.730 "is_configured": false, 00:10:55.730 "data_offset": 
0, 00:10:55.730 "data_size": 0 00:10:55.730 } 00:10:55.730 ] 00:10:55.730 }' 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.730 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.299 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:56.299 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.299 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.299 [2024-11-18 03:10:59.622644] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:56.299 [2024-11-18 03:10:59.622796] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:56.299 [2024-11-18 03:10:59.622825] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:56.299 [2024-11-18 03:10:59.623201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:56.299 [2024-11-18 03:10:59.623410] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:56.299 [2024-11-18 03:10:59.623462] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:56.299 [2024-11-18 03:10:59.623731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.299 BaseBdev4 00:10:56.299 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.299 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:56.299 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:56.299 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:10:56.299 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:56.299 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:56.299 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:56.299 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:56.299 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.299 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.299 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.299 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:56.299 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.299 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.299 [ 00:10:56.299 { 00:10:56.299 "name": "BaseBdev4", 00:10:56.299 "aliases": [ 00:10:56.299 "dde030e4-e203-4928-a7aa-64453d842ad3" 00:10:56.299 ], 00:10:56.299 "product_name": "Malloc disk", 00:10:56.299 "block_size": 512, 00:10:56.299 "num_blocks": 65536, 00:10:56.299 "uuid": "dde030e4-e203-4928-a7aa-64453d842ad3", 00:10:56.299 "assigned_rate_limits": { 00:10:56.299 "rw_ios_per_sec": 0, 00:10:56.299 "rw_mbytes_per_sec": 0, 00:10:56.299 "r_mbytes_per_sec": 0, 00:10:56.299 "w_mbytes_per_sec": 0 00:10:56.299 }, 00:10:56.299 "claimed": true, 00:10:56.299 "claim_type": "exclusive_write", 00:10:56.299 "zoned": false, 00:10:56.299 "supported_io_types": { 00:10:56.299 "read": true, 00:10:56.299 "write": true, 00:10:56.299 "unmap": true, 00:10:56.299 "flush": true, 00:10:56.299 "reset": true, 00:10:56.299 "nvme_admin": false, 00:10:56.299 "nvme_io": 
false, 00:10:56.299 "nvme_io_md": false, 00:10:56.299 "write_zeroes": true, 00:10:56.299 "zcopy": true, 00:10:56.299 "get_zone_info": false, 00:10:56.299 "zone_management": false, 00:10:56.299 "zone_append": false, 00:10:56.299 "compare": false, 00:10:56.299 "compare_and_write": false, 00:10:56.299 "abort": true, 00:10:56.300 "seek_hole": false, 00:10:56.300 "seek_data": false, 00:10:56.300 "copy": true, 00:10:56.300 "nvme_iov_md": false 00:10:56.300 }, 00:10:56.300 "memory_domains": [ 00:10:56.300 { 00:10:56.300 "dma_device_id": "system", 00:10:56.300 "dma_device_type": 1 00:10:56.300 }, 00:10:56.300 { 00:10:56.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.300 "dma_device_type": 2 00:10:56.300 } 00:10:56.300 ], 00:10:56.300 "driver_specific": {} 00:10:56.300 } 00:10:56.300 ] 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.300 "name": "Existed_Raid", 00:10:56.300 "uuid": "631dec3b-9155-4b13-979f-42a68aa399df", 00:10:56.300 "strip_size_kb": 0, 00:10:56.300 "state": "online", 00:10:56.300 "raid_level": "raid1", 00:10:56.300 "superblock": false, 00:10:56.300 "num_base_bdevs": 4, 00:10:56.300 "num_base_bdevs_discovered": 4, 00:10:56.300 "num_base_bdevs_operational": 4, 00:10:56.300 "base_bdevs_list": [ 00:10:56.300 { 00:10:56.300 "name": "BaseBdev1", 00:10:56.300 "uuid": "ec99d276-4d53-4633-be3c-06a7b9175598", 00:10:56.300 "is_configured": true, 00:10:56.300 "data_offset": 0, 00:10:56.300 "data_size": 65536 00:10:56.300 }, 00:10:56.300 { 00:10:56.300 "name": "BaseBdev2", 00:10:56.300 "uuid": "8c647b2b-e161-440a-873c-32c0c73c7456", 00:10:56.300 "is_configured": true, 00:10:56.300 "data_offset": 0, 00:10:56.300 "data_size": 65536 00:10:56.300 }, 00:10:56.300 { 00:10:56.300 "name": "BaseBdev3", 00:10:56.300 "uuid": "2632d3c4-a850-4752-92fa-8dc6f3114378", 
00:10:56.300 "is_configured": true, 00:10:56.300 "data_offset": 0, 00:10:56.300 "data_size": 65536 00:10:56.300 }, 00:10:56.300 { 00:10:56.300 "name": "BaseBdev4", 00:10:56.300 "uuid": "dde030e4-e203-4928-a7aa-64453d842ad3", 00:10:56.300 "is_configured": true, 00:10:56.300 "data_offset": 0, 00:10:56.300 "data_size": 65536 00:10:56.300 } 00:10:56.300 ] 00:10:56.300 }' 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.300 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.559 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:56.559 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:56.559 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.559 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:56.559 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.559 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.559 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:56.559 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.559 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.559 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.819 [2024-11-18 03:11:00.138165] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.819 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.819 03:11:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.819 "name": "Existed_Raid", 00:10:56.819 "aliases": [ 00:10:56.819 "631dec3b-9155-4b13-979f-42a68aa399df" 00:10:56.819 ], 00:10:56.819 "product_name": "Raid Volume", 00:10:56.819 "block_size": 512, 00:10:56.819 "num_blocks": 65536, 00:10:56.819 "uuid": "631dec3b-9155-4b13-979f-42a68aa399df", 00:10:56.819 "assigned_rate_limits": { 00:10:56.819 "rw_ios_per_sec": 0, 00:10:56.819 "rw_mbytes_per_sec": 0, 00:10:56.819 "r_mbytes_per_sec": 0, 00:10:56.819 "w_mbytes_per_sec": 0 00:10:56.819 }, 00:10:56.819 "claimed": false, 00:10:56.819 "zoned": false, 00:10:56.819 "supported_io_types": { 00:10:56.819 "read": true, 00:10:56.819 "write": true, 00:10:56.819 "unmap": false, 00:10:56.819 "flush": false, 00:10:56.819 "reset": true, 00:10:56.819 "nvme_admin": false, 00:10:56.819 "nvme_io": false, 00:10:56.819 "nvme_io_md": false, 00:10:56.819 "write_zeroes": true, 00:10:56.819 "zcopy": false, 00:10:56.819 "get_zone_info": false, 00:10:56.819 "zone_management": false, 00:10:56.819 "zone_append": false, 00:10:56.819 "compare": false, 00:10:56.819 "compare_and_write": false, 00:10:56.819 "abort": false, 00:10:56.819 "seek_hole": false, 00:10:56.819 "seek_data": false, 00:10:56.819 "copy": false, 00:10:56.819 "nvme_iov_md": false 00:10:56.819 }, 00:10:56.819 "memory_domains": [ 00:10:56.819 { 00:10:56.819 "dma_device_id": "system", 00:10:56.819 "dma_device_type": 1 00:10:56.819 }, 00:10:56.819 { 00:10:56.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.819 "dma_device_type": 2 00:10:56.819 }, 00:10:56.819 { 00:10:56.819 "dma_device_id": "system", 00:10:56.819 "dma_device_type": 1 00:10:56.819 }, 00:10:56.819 { 00:10:56.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.819 "dma_device_type": 2 00:10:56.819 }, 00:10:56.819 { 00:10:56.819 "dma_device_id": "system", 00:10:56.819 "dma_device_type": 1 00:10:56.819 }, 00:10:56.819 { 00:10:56.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.819 "dma_device_type": 2 
00:10:56.819 }, 00:10:56.819 { 00:10:56.819 "dma_device_id": "system", 00:10:56.819 "dma_device_type": 1 00:10:56.819 }, 00:10:56.819 { 00:10:56.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.819 "dma_device_type": 2 00:10:56.819 } 00:10:56.819 ], 00:10:56.819 "driver_specific": { 00:10:56.819 "raid": { 00:10:56.819 "uuid": "631dec3b-9155-4b13-979f-42a68aa399df", 00:10:56.819 "strip_size_kb": 0, 00:10:56.819 "state": "online", 00:10:56.819 "raid_level": "raid1", 00:10:56.819 "superblock": false, 00:10:56.819 "num_base_bdevs": 4, 00:10:56.819 "num_base_bdevs_discovered": 4, 00:10:56.819 "num_base_bdevs_operational": 4, 00:10:56.819 "base_bdevs_list": [ 00:10:56.819 { 00:10:56.819 "name": "BaseBdev1", 00:10:56.819 "uuid": "ec99d276-4d53-4633-be3c-06a7b9175598", 00:10:56.819 "is_configured": true, 00:10:56.819 "data_offset": 0, 00:10:56.819 "data_size": 65536 00:10:56.819 }, 00:10:56.819 { 00:10:56.819 "name": "BaseBdev2", 00:10:56.819 "uuid": "8c647b2b-e161-440a-873c-32c0c73c7456", 00:10:56.819 "is_configured": true, 00:10:56.819 "data_offset": 0, 00:10:56.819 "data_size": 65536 00:10:56.819 }, 00:10:56.819 { 00:10:56.819 "name": "BaseBdev3", 00:10:56.819 "uuid": "2632d3c4-a850-4752-92fa-8dc6f3114378", 00:10:56.819 "is_configured": true, 00:10:56.819 "data_offset": 0, 00:10:56.819 "data_size": 65536 00:10:56.819 }, 00:10:56.819 { 00:10:56.819 "name": "BaseBdev4", 00:10:56.819 "uuid": "dde030e4-e203-4928-a7aa-64453d842ad3", 00:10:56.820 "is_configured": true, 00:10:56.820 "data_offset": 0, 00:10:56.820 "data_size": 65536 00:10:56.820 } 00:10:56.820 ] 00:10:56.820 } 00:10:56.820 } 00:10:56.820 }' 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:56.820 BaseBdev2 00:10:56.820 BaseBdev3 00:10:56.820 BaseBdev4' 00:10:56.820 
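The trace above collects the configured base bdev names by piping the raid bdev's JSON through `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'`. A minimal Python sketch of that same filter, run against an abbreviated stand-in for the `bdev_get_bdevs` output seen in the log (not the actual SPDK RPC call):

```python
import json

# Abbreviated stand-in for the Existed_Raid JSON dumped in the trace above.
raid_info = json.loads("""
{
  "name": "Existed_Raid",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": true},
        {"name": "BaseBdev4", "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
base_bdev_names = [
    b["name"]
    for b in raid_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print("\n".join(base_bdev_names))
```

The shell script stores this newline-separated list in `base_bdev_names` and iterates over it with `for name in $base_bdev_names`.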
03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.820 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
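Each comparison in the loop above builds a string from `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` for both the raid bdev and each base bdev, then compares them with `[[ ... == ... ]]`. jq's `join` renders null fields as empty strings, which is why the trace shows values like `'512   '` with three trailing spaces (visible in the escaped pattern `\5\1\2\ \ \ `). A hedged Python emulation of that string construction:

```python
def cmp_string(bdev):
    """Mirror of jq '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'.
    jq's join() renders null/absent fields as empty strings, so a bdev that
    only reports block_size yields '512   ' (three trailing spaces)."""
    fields = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join(
        "" if bdev.get(f) is None else str(bdev[f]) for f in fields
    )

# As in the trace: only block_size (512) is set on both sides.
raid_bdev = {"block_size": 512}
base_bdev = {"block_size": 512}
assert cmp_string(raid_bdev) == cmp_string(base_bdev) == "512   "
```

The comparison passing for all four base bdevs is what `verify_raid_bdev_properties` asserts: every member exposes the same block size and metadata layout as the raid volume itself.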
512 == \5\1\2\ \ \ ]] 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.079 [2024-11-18 03:11:00.465282] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.079 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.079 "name": "Existed_Raid", 00:10:57.079 "uuid": "631dec3b-9155-4b13-979f-42a68aa399df", 00:10:57.079 "strip_size_kb": 0, 00:10:57.079 "state": "online", 00:10:57.079 "raid_level": "raid1", 00:10:57.079 "superblock": false, 00:10:57.079 "num_base_bdevs": 4, 00:10:57.079 "num_base_bdevs_discovered": 3, 00:10:57.080 "num_base_bdevs_operational": 3, 00:10:57.080 "base_bdevs_list": [ 00:10:57.080 { 00:10:57.080 "name": null, 00:10:57.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.080 "is_configured": false, 00:10:57.080 "data_offset": 0, 00:10:57.080 "data_size": 65536 00:10:57.080 }, 00:10:57.080 { 00:10:57.080 "name": "BaseBdev2", 00:10:57.080 "uuid": "8c647b2b-e161-440a-873c-32c0c73c7456", 00:10:57.080 "is_configured": true, 00:10:57.080 "data_offset": 0, 00:10:57.080 "data_size": 65536 00:10:57.080 }, 00:10:57.080 { 00:10:57.080 "name": "BaseBdev3", 00:10:57.080 "uuid": "2632d3c4-a850-4752-92fa-8dc6f3114378", 00:10:57.080 "is_configured": true, 00:10:57.080 "data_offset": 0, 00:10:57.080 "data_size": 65536 00:10:57.080 }, 00:10:57.080 { 
00:10:57.080 "name": "BaseBdev4", 00:10:57.080 "uuid": "dde030e4-e203-4928-a7aa-64453d842ad3", 00:10:57.080 "is_configured": true, 00:10:57.080 "data_offset": 0, 00:10:57.080 "data_size": 65536 00:10:57.080 } 00:10:57.080 ] 00:10:57.080 }' 00:10:57.080 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.080 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.648 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:57.648 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.648 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.648 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.649 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.649 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.649 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.649 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.649 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.649 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:57.649 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.649 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.649 [2024-11-18 03:11:00.963871] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:57.649 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.649 
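After deleting BaseBdev1, the script calls `verify_raid_bdev_state Existed_Raid online raid1 0 3`, fetches the raid bdev JSON shown above, and compares individual fields against the expected values. A loose Python sketch of those comparisons (an illustration of what the shell function checks, not the actual SPDK test code):

```python
def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size_kb, num_operational):
    """Compare the fields that bdev_raid.sh's verify_raid_bdev_state
    extracts with jq against the caller's expected values."""
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size_kb
            and info["num_base_bdevs_operational"] == num_operational)

# Values taken from the Existed_Raid dump above, after BaseBdev1 removal:
# raid1 has redundancy, so the array stays online with 3 of 4 members.
existed_raid = {
    "name": "Existed_Raid",
    "state": "online",
    "raid_level": "raid1",
    "strip_size_kb": 0,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3,
}
assert verify_raid_bdev_state(existed_raid, "online", "raid1", 0, 3)
```

The `has_redundancy raid1` call earlier in the trace returning 0 is what selects `expected_state=online` rather than `offline` for this removal.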
03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.649 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.649 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.649 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.649 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.649 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.649 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.649 [2024-11-18 03:11:01.031167] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.649 03:11:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.649 [2024-11-18 03:11:01.094496] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:57.649 [2024-11-18 03:11:01.094631] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.649 [2024-11-18 03:11:01.106374] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.649 [2024-11-18 03:11:01.106499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.649 [2024-11-18 03:11:01.106542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.649 03:11:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.649 BaseBdev2 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.649 03:11:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.649 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.649 [ 00:10:57.649 { 00:10:57.649 "name": "BaseBdev2", 00:10:57.649 "aliases": [ 00:10:57.649 "f81de6d5-b8fc-475b-9339-f0caafb4f99d" 00:10:57.649 ], 00:10:57.649 "product_name": "Malloc disk", 00:10:57.649 "block_size": 512, 00:10:57.649 "num_blocks": 65536, 00:10:57.649 "uuid": "f81de6d5-b8fc-475b-9339-f0caafb4f99d", 00:10:57.649 "assigned_rate_limits": { 00:10:57.649 "rw_ios_per_sec": 0, 00:10:57.649 "rw_mbytes_per_sec": 0, 00:10:57.649 "r_mbytes_per_sec": 0, 00:10:57.649 "w_mbytes_per_sec": 0 00:10:57.649 }, 00:10:57.649 "claimed": false, 00:10:57.649 "zoned": false, 00:10:57.649 "supported_io_types": { 00:10:57.649 "read": true, 00:10:57.649 "write": true, 00:10:57.649 "unmap": true, 00:10:57.649 "flush": true, 00:10:57.649 "reset": true, 00:10:57.649 "nvme_admin": false, 00:10:57.649 "nvme_io": false, 00:10:57.649 "nvme_io_md": false, 00:10:57.649 "write_zeroes": true, 00:10:57.649 "zcopy": true, 00:10:57.649 "get_zone_info": false, 00:10:57.649 "zone_management": false, 00:10:57.649 "zone_append": false, 00:10:57.649 "compare": false, 00:10:57.649 "compare_and_write": false, 
00:10:57.649 "abort": true, 00:10:57.649 "seek_hole": false, 00:10:57.649 "seek_data": false, 00:10:57.649 "copy": true, 00:10:57.649 "nvme_iov_md": false 00:10:57.649 }, 00:10:57.650 "memory_domains": [ 00:10:57.650 { 00:10:57.650 "dma_device_id": "system", 00:10:57.650 "dma_device_type": 1 00:10:57.650 }, 00:10:57.650 { 00:10:57.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.650 "dma_device_type": 2 00:10:57.650 } 00:10:57.650 ], 00:10:57.650 "driver_specific": {} 00:10:57.650 } 00:10:57.650 ] 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.650 BaseBdev3 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.650 03:11:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.650 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.909 [ 00:10:57.909 { 00:10:57.909 "name": "BaseBdev3", 00:10:57.909 "aliases": [ 00:10:57.909 "96d2c05b-de9d-4ac7-843b-14daca7645fc" 00:10:57.909 ], 00:10:57.909 "product_name": "Malloc disk", 00:10:57.909 "block_size": 512, 00:10:57.909 "num_blocks": 65536, 00:10:57.909 "uuid": "96d2c05b-de9d-4ac7-843b-14daca7645fc", 00:10:57.910 "assigned_rate_limits": { 00:10:57.910 "rw_ios_per_sec": 0, 00:10:57.910 "rw_mbytes_per_sec": 0, 00:10:57.910 "r_mbytes_per_sec": 0, 00:10:57.910 "w_mbytes_per_sec": 0 00:10:57.910 }, 00:10:57.910 "claimed": false, 00:10:57.910 "zoned": false, 00:10:57.910 "supported_io_types": { 00:10:57.910 "read": true, 00:10:57.910 "write": true, 00:10:57.910 "unmap": true, 00:10:57.910 "flush": true, 00:10:57.910 "reset": true, 00:10:57.910 "nvme_admin": false, 00:10:57.910 "nvme_io": false, 00:10:57.910 "nvme_io_md": false, 00:10:57.910 "write_zeroes": true, 00:10:57.910 "zcopy": true, 00:10:57.910 "get_zone_info": false, 00:10:57.910 "zone_management": false, 00:10:57.910 "zone_append": false, 00:10:57.910 "compare": false, 00:10:57.910 "compare_and_write": false, 
00:10:57.910 "abort": true, 00:10:57.910 "seek_hole": false, 00:10:57.910 "seek_data": false, 00:10:57.910 "copy": true, 00:10:57.910 "nvme_iov_md": false 00:10:57.910 }, 00:10:57.910 "memory_domains": [ 00:10:57.910 { 00:10:57.910 "dma_device_id": "system", 00:10:57.910 "dma_device_type": 1 00:10:57.910 }, 00:10:57.910 { 00:10:57.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.910 "dma_device_type": 2 00:10:57.910 } 00:10:57.910 ], 00:10:57.910 "driver_specific": {} 00:10:57.910 } 00:10:57.910 ] 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.910 BaseBdev4 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.910 03:11:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.910 [ 00:10:57.910 { 00:10:57.910 "name": "BaseBdev4", 00:10:57.910 "aliases": [ 00:10:57.910 "7986acca-65e1-432a-aa57-78e9fa43d01d" 00:10:57.910 ], 00:10:57.910 "product_name": "Malloc disk", 00:10:57.910 "block_size": 512, 00:10:57.910 "num_blocks": 65536, 00:10:57.910 "uuid": "7986acca-65e1-432a-aa57-78e9fa43d01d", 00:10:57.910 "assigned_rate_limits": { 00:10:57.910 "rw_ios_per_sec": 0, 00:10:57.910 "rw_mbytes_per_sec": 0, 00:10:57.910 "r_mbytes_per_sec": 0, 00:10:57.910 "w_mbytes_per_sec": 0 00:10:57.910 }, 00:10:57.910 "claimed": false, 00:10:57.910 "zoned": false, 00:10:57.910 "supported_io_types": { 00:10:57.910 "read": true, 00:10:57.910 "write": true, 00:10:57.910 "unmap": true, 00:10:57.910 "flush": true, 00:10:57.910 "reset": true, 00:10:57.910 "nvme_admin": false, 00:10:57.910 "nvme_io": false, 00:10:57.910 "nvme_io_md": false, 00:10:57.910 "write_zeroes": true, 00:10:57.910 "zcopy": true, 00:10:57.910 "get_zone_info": false, 00:10:57.910 "zone_management": false, 00:10:57.910 "zone_append": false, 00:10:57.910 "compare": false, 00:10:57.910 "compare_and_write": false, 
00:10:57.910 "abort": true, 00:10:57.910 "seek_hole": false, 00:10:57.910 "seek_data": false, 00:10:57.910 "copy": true, 00:10:57.910 "nvme_iov_md": false 00:10:57.910 }, 00:10:57.910 "memory_domains": [ 00:10:57.910 { 00:10:57.910 "dma_device_id": "system", 00:10:57.910 "dma_device_type": 1 00:10:57.910 }, 00:10:57.910 { 00:10:57.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.910 "dma_device_type": 2 00:10:57.910 } 00:10:57.910 ], 00:10:57.910 "driver_specific": {} 00:10:57.910 } 00:10:57.910 ] 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.910 [2024-11-18 03:11:01.279894] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.910 [2024-11-18 03:11:01.279999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.910 [2024-11-18 03:11:01.280040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.910 [2024-11-18 03:11:01.281888] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.910 [2024-11-18 03:11:01.281986] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:57.910 03:11:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.910 "name": "Existed_Raid", 00:10:57.910 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:57.910 "strip_size_kb": 0, 00:10:57.910 "state": "configuring", 00:10:57.910 "raid_level": "raid1", 00:10:57.910 "superblock": false, 00:10:57.910 "num_base_bdevs": 4, 00:10:57.910 "num_base_bdevs_discovered": 3, 00:10:57.910 "num_base_bdevs_operational": 4, 00:10:57.910 "base_bdevs_list": [ 00:10:57.910 { 00:10:57.910 "name": "BaseBdev1", 00:10:57.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.910 "is_configured": false, 00:10:57.910 "data_offset": 0, 00:10:57.910 "data_size": 0 00:10:57.910 }, 00:10:57.910 { 00:10:57.910 "name": "BaseBdev2", 00:10:57.910 "uuid": "f81de6d5-b8fc-475b-9339-f0caafb4f99d", 00:10:57.910 "is_configured": true, 00:10:57.910 "data_offset": 0, 00:10:57.910 "data_size": 65536 00:10:57.910 }, 00:10:57.910 { 00:10:57.910 "name": "BaseBdev3", 00:10:57.910 "uuid": "96d2c05b-de9d-4ac7-843b-14daca7645fc", 00:10:57.910 "is_configured": true, 00:10:57.910 "data_offset": 0, 00:10:57.910 "data_size": 65536 00:10:57.910 }, 00:10:57.910 { 00:10:57.910 "name": "BaseBdev4", 00:10:57.910 "uuid": "7986acca-65e1-432a-aa57-78e9fa43d01d", 00:10:57.910 "is_configured": true, 00:10:57.910 "data_offset": 0, 00:10:57.910 "data_size": 65536 00:10:57.910 } 00:10:57.910 ] 00:10:57.910 }' 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.910 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.479 [2024-11-18 03:11:01.779104] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.479 "name": "Existed_Raid", 00:10:58.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.479 
"strip_size_kb": 0, 00:10:58.479 "state": "configuring", 00:10:58.479 "raid_level": "raid1", 00:10:58.479 "superblock": false, 00:10:58.479 "num_base_bdevs": 4, 00:10:58.479 "num_base_bdevs_discovered": 2, 00:10:58.479 "num_base_bdevs_operational": 4, 00:10:58.479 "base_bdevs_list": [ 00:10:58.479 { 00:10:58.479 "name": "BaseBdev1", 00:10:58.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.479 "is_configured": false, 00:10:58.479 "data_offset": 0, 00:10:58.479 "data_size": 0 00:10:58.479 }, 00:10:58.479 { 00:10:58.479 "name": null, 00:10:58.479 "uuid": "f81de6d5-b8fc-475b-9339-f0caafb4f99d", 00:10:58.479 "is_configured": false, 00:10:58.479 "data_offset": 0, 00:10:58.479 "data_size": 65536 00:10:58.479 }, 00:10:58.479 { 00:10:58.479 "name": "BaseBdev3", 00:10:58.479 "uuid": "96d2c05b-de9d-4ac7-843b-14daca7645fc", 00:10:58.479 "is_configured": true, 00:10:58.479 "data_offset": 0, 00:10:58.479 "data_size": 65536 00:10:58.479 }, 00:10:58.479 { 00:10:58.479 "name": "BaseBdev4", 00:10:58.479 "uuid": "7986acca-65e1-432a-aa57-78e9fa43d01d", 00:10:58.479 "is_configured": true, 00:10:58.479 "data_offset": 0, 00:10:58.479 "data_size": 65536 00:10:58.479 } 00:10:58.479 ] 00:10:58.479 }' 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.479 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.739 03:11:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.739 [2024-11-18 03:11:02.285302] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.739 BaseBdev1 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.739 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.739 [ 00:10:58.739 { 00:10:58.739 "name": "BaseBdev1", 00:10:58.739 "aliases": [ 00:10:58.739 "e3ab6b46-ec67-455d-bd9f-eba34b2a265c" 00:10:58.739 ], 00:10:58.739 "product_name": "Malloc disk", 00:10:58.739 "block_size": 512, 00:10:58.739 "num_blocks": 65536, 00:10:58.739 "uuid": "e3ab6b46-ec67-455d-bd9f-eba34b2a265c", 00:10:58.739 "assigned_rate_limits": { 00:10:58.739 "rw_ios_per_sec": 0, 00:10:58.739 "rw_mbytes_per_sec": 0, 00:10:58.739 "r_mbytes_per_sec": 0, 00:10:58.739 "w_mbytes_per_sec": 0 00:10:58.998 }, 00:10:58.998 "claimed": true, 00:10:58.998 "claim_type": "exclusive_write", 00:10:58.998 "zoned": false, 00:10:58.998 "supported_io_types": { 00:10:58.998 "read": true, 00:10:58.998 "write": true, 00:10:58.998 "unmap": true, 00:10:58.998 "flush": true, 00:10:58.998 "reset": true, 00:10:58.998 "nvme_admin": false, 00:10:58.998 "nvme_io": false, 00:10:58.998 "nvme_io_md": false, 00:10:58.998 "write_zeroes": true, 00:10:58.998 "zcopy": true, 00:10:58.998 "get_zone_info": false, 00:10:58.998 "zone_management": false, 00:10:58.998 "zone_append": false, 00:10:58.998 "compare": false, 00:10:58.998 "compare_and_write": false, 00:10:58.998 "abort": true, 00:10:58.998 "seek_hole": false, 00:10:58.998 "seek_data": false, 00:10:58.998 "copy": true, 00:10:58.998 "nvme_iov_md": false 00:10:58.998 }, 00:10:58.998 "memory_domains": [ 00:10:58.998 { 00:10:58.998 "dma_device_id": "system", 00:10:58.998 "dma_device_type": 1 00:10:58.998 }, 00:10:58.998 { 00:10:58.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.998 "dma_device_type": 2 00:10:58.998 } 00:10:58.998 ], 00:10:58.998 "driver_specific": {} 00:10:58.998 } 00:10:58.998 ] 00:10:58.998 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.998 03:11:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:10:58.998 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:58.998 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.998 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.998 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.998 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.998 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.998 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.998 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.998 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.998 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.998 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.998 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.998 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.998 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.998 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.998 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.998 "name": "Existed_Raid", 00:10:58.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.998 
"strip_size_kb": 0, 00:10:58.998 "state": "configuring", 00:10:58.998 "raid_level": "raid1", 00:10:58.998 "superblock": false, 00:10:58.998 "num_base_bdevs": 4, 00:10:58.998 "num_base_bdevs_discovered": 3, 00:10:58.998 "num_base_bdevs_operational": 4, 00:10:58.998 "base_bdevs_list": [ 00:10:58.998 { 00:10:58.998 "name": "BaseBdev1", 00:10:58.998 "uuid": "e3ab6b46-ec67-455d-bd9f-eba34b2a265c", 00:10:58.998 "is_configured": true, 00:10:58.998 "data_offset": 0, 00:10:58.998 "data_size": 65536 00:10:58.998 }, 00:10:58.998 { 00:10:58.998 "name": null, 00:10:58.998 "uuid": "f81de6d5-b8fc-475b-9339-f0caafb4f99d", 00:10:58.998 "is_configured": false, 00:10:58.998 "data_offset": 0, 00:10:58.998 "data_size": 65536 00:10:58.998 }, 00:10:58.999 { 00:10:58.999 "name": "BaseBdev3", 00:10:58.999 "uuid": "96d2c05b-de9d-4ac7-843b-14daca7645fc", 00:10:58.999 "is_configured": true, 00:10:58.999 "data_offset": 0, 00:10:58.999 "data_size": 65536 00:10:58.999 }, 00:10:58.999 { 00:10:58.999 "name": "BaseBdev4", 00:10:58.999 "uuid": "7986acca-65e1-432a-aa57-78e9fa43d01d", 00:10:58.999 "is_configured": true, 00:10:58.999 "data_offset": 0, 00:10:58.999 "data_size": 65536 00:10:58.999 } 00:10:58.999 ] 00:10:58.999 }' 00:10:58.999 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.999 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.259 
03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.259 [2024-11-18 03:11:02.776507] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.259 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.519 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.519 "name": "Existed_Raid", 00:10:59.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.519 "strip_size_kb": 0, 00:10:59.519 "state": "configuring", 00:10:59.519 "raid_level": "raid1", 00:10:59.519 "superblock": false, 00:10:59.519 "num_base_bdevs": 4, 00:10:59.519 "num_base_bdevs_discovered": 2, 00:10:59.519 "num_base_bdevs_operational": 4, 00:10:59.519 "base_bdevs_list": [ 00:10:59.519 { 00:10:59.519 "name": "BaseBdev1", 00:10:59.519 "uuid": "e3ab6b46-ec67-455d-bd9f-eba34b2a265c", 00:10:59.519 "is_configured": true, 00:10:59.519 "data_offset": 0, 00:10:59.519 "data_size": 65536 00:10:59.519 }, 00:10:59.519 { 00:10:59.519 "name": null, 00:10:59.519 "uuid": "f81de6d5-b8fc-475b-9339-f0caafb4f99d", 00:10:59.519 "is_configured": false, 00:10:59.519 "data_offset": 0, 00:10:59.519 "data_size": 65536 00:10:59.519 }, 00:10:59.519 { 00:10:59.519 "name": null, 00:10:59.519 "uuid": "96d2c05b-de9d-4ac7-843b-14daca7645fc", 00:10:59.519 "is_configured": false, 00:10:59.519 "data_offset": 0, 00:10:59.519 "data_size": 65536 00:10:59.519 }, 00:10:59.519 { 00:10:59.519 "name": "BaseBdev4", 00:10:59.519 "uuid": "7986acca-65e1-432a-aa57-78e9fa43d01d", 00:10:59.519 "is_configured": true, 00:10:59.519 "data_offset": 0, 00:10:59.519 "data_size": 65536 00:10:59.519 } 00:10:59.519 ] 00:10:59.519 }' 00:10:59.519 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.519 03:11:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.780 [2024-11-18 03:11:03.263728] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.780 "name": "Existed_Raid", 00:10:59.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.780 "strip_size_kb": 0, 00:10:59.780 "state": "configuring", 00:10:59.780 "raid_level": "raid1", 00:10:59.780 "superblock": false, 00:10:59.780 "num_base_bdevs": 4, 00:10:59.780 "num_base_bdevs_discovered": 3, 00:10:59.780 "num_base_bdevs_operational": 4, 00:10:59.780 "base_bdevs_list": [ 00:10:59.780 { 00:10:59.780 "name": "BaseBdev1", 00:10:59.780 "uuid": "e3ab6b46-ec67-455d-bd9f-eba34b2a265c", 00:10:59.780 "is_configured": true, 00:10:59.780 "data_offset": 0, 00:10:59.780 "data_size": 65536 00:10:59.780 }, 00:10:59.780 { 00:10:59.780 "name": null, 00:10:59.780 "uuid": "f81de6d5-b8fc-475b-9339-f0caafb4f99d", 00:10:59.780 "is_configured": false, 00:10:59.780 "data_offset": 0, 00:10:59.780 "data_size": 65536 00:10:59.780 }, 00:10:59.780 { 
00:10:59.780 "name": "BaseBdev3", 00:10:59.780 "uuid": "96d2c05b-de9d-4ac7-843b-14daca7645fc", 00:10:59.780 "is_configured": true, 00:10:59.780 "data_offset": 0, 00:10:59.780 "data_size": 65536 00:10:59.780 }, 00:10:59.780 { 00:10:59.780 "name": "BaseBdev4", 00:10:59.780 "uuid": "7986acca-65e1-432a-aa57-78e9fa43d01d", 00:10:59.780 "is_configured": true, 00:10:59.780 "data_offset": 0, 00:10:59.780 "data_size": 65536 00:10:59.780 } 00:10:59.780 ] 00:10:59.780 }' 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.780 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.350 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.351 [2024-11-18 03:11:03.747010] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.351 "name": "Existed_Raid", 00:11:00.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.351 "strip_size_kb": 0, 00:11:00.351 "state": "configuring", 00:11:00.351 "raid_level": "raid1", 00:11:00.351 "superblock": false, 00:11:00.351 
"num_base_bdevs": 4, 00:11:00.351 "num_base_bdevs_discovered": 2, 00:11:00.351 "num_base_bdevs_operational": 4, 00:11:00.351 "base_bdevs_list": [ 00:11:00.351 { 00:11:00.351 "name": null, 00:11:00.351 "uuid": "e3ab6b46-ec67-455d-bd9f-eba34b2a265c", 00:11:00.351 "is_configured": false, 00:11:00.351 "data_offset": 0, 00:11:00.351 "data_size": 65536 00:11:00.351 }, 00:11:00.351 { 00:11:00.351 "name": null, 00:11:00.351 "uuid": "f81de6d5-b8fc-475b-9339-f0caafb4f99d", 00:11:00.351 "is_configured": false, 00:11:00.351 "data_offset": 0, 00:11:00.351 "data_size": 65536 00:11:00.351 }, 00:11:00.351 { 00:11:00.351 "name": "BaseBdev3", 00:11:00.351 "uuid": "96d2c05b-de9d-4ac7-843b-14daca7645fc", 00:11:00.351 "is_configured": true, 00:11:00.351 "data_offset": 0, 00:11:00.351 "data_size": 65536 00:11:00.351 }, 00:11:00.351 { 00:11:00.351 "name": "BaseBdev4", 00:11:00.351 "uuid": "7986acca-65e1-432a-aa57-78e9fa43d01d", 00:11:00.351 "is_configured": true, 00:11:00.351 "data_offset": 0, 00:11:00.351 "data_size": 65536 00:11:00.351 } 00:11:00.351 ] 00:11:00.351 }' 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.351 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.613 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.613 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:00.613 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.613 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:00.877 03:11:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.877 [2024-11-18 03:11:04.240768] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.877 03:11:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.877 "name": "Existed_Raid", 00:11:00.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.877 "strip_size_kb": 0, 00:11:00.877 "state": "configuring", 00:11:00.877 "raid_level": "raid1", 00:11:00.877 "superblock": false, 00:11:00.877 "num_base_bdevs": 4, 00:11:00.877 "num_base_bdevs_discovered": 3, 00:11:00.877 "num_base_bdevs_operational": 4, 00:11:00.877 "base_bdevs_list": [ 00:11:00.877 { 00:11:00.877 "name": null, 00:11:00.877 "uuid": "e3ab6b46-ec67-455d-bd9f-eba34b2a265c", 00:11:00.877 "is_configured": false, 00:11:00.877 "data_offset": 0, 00:11:00.877 "data_size": 65536 00:11:00.877 }, 00:11:00.877 { 00:11:00.877 "name": "BaseBdev2", 00:11:00.877 "uuid": "f81de6d5-b8fc-475b-9339-f0caafb4f99d", 00:11:00.877 "is_configured": true, 00:11:00.877 "data_offset": 0, 00:11:00.877 "data_size": 65536 00:11:00.877 }, 00:11:00.877 { 00:11:00.877 "name": "BaseBdev3", 00:11:00.877 "uuid": "96d2c05b-de9d-4ac7-843b-14daca7645fc", 00:11:00.877 "is_configured": true, 00:11:00.877 "data_offset": 0, 00:11:00.877 "data_size": 65536 00:11:00.877 }, 00:11:00.877 { 00:11:00.877 "name": "BaseBdev4", 00:11:00.877 "uuid": "7986acca-65e1-432a-aa57-78e9fa43d01d", 00:11:00.877 "is_configured": true, 00:11:00.877 "data_offset": 0, 00:11:00.877 "data_size": 65536 00:11:00.877 } 00:11:00.877 ] 00:11:00.877 }' 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.877 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.137 03:11:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.137 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:01.137 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.137 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.137 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e3ab6b46-ec67-455d-bd9f-eba34b2a265c 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.397 [2024-11-18 03:11:04.794947] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:01.397 [2024-11-18 03:11:04.795038] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:01.397 [2024-11-18 03:11:04.795052] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:01.397 [2024-11-18 03:11:04.795298] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:01.397 [2024-11-18 03:11:04.795430] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:01.397 [2024-11-18 03:11:04.795446] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:11:01.397 [2024-11-18 03:11:04.795625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.397 NewBaseBdev 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.397 03:11:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.397 [ 00:11:01.397 { 00:11:01.397 "name": "NewBaseBdev", 00:11:01.397 "aliases": [ 00:11:01.397 "e3ab6b46-ec67-455d-bd9f-eba34b2a265c" 00:11:01.397 ], 00:11:01.397 "product_name": "Malloc disk", 00:11:01.397 "block_size": 512, 00:11:01.397 "num_blocks": 65536, 00:11:01.397 "uuid": "e3ab6b46-ec67-455d-bd9f-eba34b2a265c", 00:11:01.397 "assigned_rate_limits": { 00:11:01.397 "rw_ios_per_sec": 0, 00:11:01.397 "rw_mbytes_per_sec": 0, 00:11:01.397 "r_mbytes_per_sec": 0, 00:11:01.397 "w_mbytes_per_sec": 0 00:11:01.397 }, 00:11:01.397 "claimed": true, 00:11:01.397 "claim_type": "exclusive_write", 00:11:01.397 "zoned": false, 00:11:01.397 "supported_io_types": { 00:11:01.397 "read": true, 00:11:01.397 "write": true, 00:11:01.397 "unmap": true, 00:11:01.397 "flush": true, 00:11:01.397 "reset": true, 00:11:01.397 "nvme_admin": false, 00:11:01.397 "nvme_io": false, 00:11:01.397 "nvme_io_md": false, 00:11:01.397 "write_zeroes": true, 00:11:01.397 "zcopy": true, 00:11:01.397 "get_zone_info": false, 00:11:01.397 "zone_management": false, 00:11:01.397 "zone_append": false, 00:11:01.397 "compare": false, 00:11:01.397 "compare_and_write": false, 00:11:01.397 "abort": true, 00:11:01.397 "seek_hole": false, 00:11:01.397 "seek_data": false, 00:11:01.397 "copy": true, 00:11:01.397 "nvme_iov_md": false 00:11:01.397 }, 00:11:01.397 "memory_domains": [ 00:11:01.397 { 00:11:01.397 "dma_device_id": "system", 00:11:01.397 "dma_device_type": 1 00:11:01.397 }, 00:11:01.397 { 00:11:01.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.397 "dma_device_type": 2 00:11:01.397 } 00:11:01.397 ], 00:11:01.397 "driver_specific": {} 00:11:01.397 } 00:11:01.397 ] 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:01.397 03:11:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.397 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.397 "name": "Existed_Raid", 00:11:01.397 "uuid": "852a38ff-967a-4ff6-b4f8-5a5a3a486776", 00:11:01.397 "strip_size_kb": 0, 00:11:01.397 "state": "online", 00:11:01.397 "raid_level": "raid1", 
00:11:01.397 "superblock": false, 00:11:01.397 "num_base_bdevs": 4, 00:11:01.397 "num_base_bdevs_discovered": 4, 00:11:01.397 "num_base_bdevs_operational": 4, 00:11:01.397 "base_bdevs_list": [ 00:11:01.397 { 00:11:01.397 "name": "NewBaseBdev", 00:11:01.397 "uuid": "e3ab6b46-ec67-455d-bd9f-eba34b2a265c", 00:11:01.397 "is_configured": true, 00:11:01.397 "data_offset": 0, 00:11:01.397 "data_size": 65536 00:11:01.397 }, 00:11:01.397 { 00:11:01.397 "name": "BaseBdev2", 00:11:01.397 "uuid": "f81de6d5-b8fc-475b-9339-f0caafb4f99d", 00:11:01.397 "is_configured": true, 00:11:01.397 "data_offset": 0, 00:11:01.397 "data_size": 65536 00:11:01.397 }, 00:11:01.397 { 00:11:01.398 "name": "BaseBdev3", 00:11:01.398 "uuid": "96d2c05b-de9d-4ac7-843b-14daca7645fc", 00:11:01.398 "is_configured": true, 00:11:01.398 "data_offset": 0, 00:11:01.398 "data_size": 65536 00:11:01.398 }, 00:11:01.398 { 00:11:01.398 "name": "BaseBdev4", 00:11:01.398 "uuid": "7986acca-65e1-432a-aa57-78e9fa43d01d", 00:11:01.398 "is_configured": true, 00:11:01.398 "data_offset": 0, 00:11:01.398 "data_size": 65536 00:11:01.398 } 00:11:01.398 ] 00:11:01.398 }' 00:11:01.398 03:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.398 03:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.968 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:01.968 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:01.968 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:01.968 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:01.968 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.968 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:11:01.968 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:01.968 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.968 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.968 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:01.968 [2024-11-18 03:11:05.262512] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.968 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.968 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:01.968 "name": "Existed_Raid", 00:11:01.968 "aliases": [ 00:11:01.968 "852a38ff-967a-4ff6-b4f8-5a5a3a486776" 00:11:01.968 ], 00:11:01.968 "product_name": "Raid Volume", 00:11:01.968 "block_size": 512, 00:11:01.968 "num_blocks": 65536, 00:11:01.968 "uuid": "852a38ff-967a-4ff6-b4f8-5a5a3a486776", 00:11:01.968 "assigned_rate_limits": { 00:11:01.968 "rw_ios_per_sec": 0, 00:11:01.968 "rw_mbytes_per_sec": 0, 00:11:01.968 "r_mbytes_per_sec": 0, 00:11:01.968 "w_mbytes_per_sec": 0 00:11:01.968 }, 00:11:01.968 "claimed": false, 00:11:01.968 "zoned": false, 00:11:01.968 "supported_io_types": { 00:11:01.968 "read": true, 00:11:01.968 "write": true, 00:11:01.968 "unmap": false, 00:11:01.968 "flush": false, 00:11:01.968 "reset": true, 00:11:01.968 "nvme_admin": false, 00:11:01.968 "nvme_io": false, 00:11:01.968 "nvme_io_md": false, 00:11:01.968 "write_zeroes": true, 00:11:01.968 "zcopy": false, 00:11:01.968 "get_zone_info": false, 00:11:01.968 "zone_management": false, 00:11:01.968 "zone_append": false, 00:11:01.968 "compare": false, 00:11:01.968 "compare_and_write": false, 00:11:01.968 "abort": false, 00:11:01.968 "seek_hole": false, 00:11:01.968 "seek_data": false, 00:11:01.968 "copy": false, 00:11:01.968 
"nvme_iov_md": false 00:11:01.968 }, 00:11:01.968 "memory_domains": [ 00:11:01.968 { 00:11:01.968 "dma_device_id": "system", 00:11:01.968 "dma_device_type": 1 00:11:01.968 }, 00:11:01.968 { 00:11:01.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.968 "dma_device_type": 2 00:11:01.968 }, 00:11:01.968 { 00:11:01.968 "dma_device_id": "system", 00:11:01.968 "dma_device_type": 1 00:11:01.968 }, 00:11:01.968 { 00:11:01.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.969 "dma_device_type": 2 00:11:01.969 }, 00:11:01.969 { 00:11:01.969 "dma_device_id": "system", 00:11:01.969 "dma_device_type": 1 00:11:01.969 }, 00:11:01.969 { 00:11:01.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.969 "dma_device_type": 2 00:11:01.969 }, 00:11:01.969 { 00:11:01.969 "dma_device_id": "system", 00:11:01.969 "dma_device_type": 1 00:11:01.969 }, 00:11:01.969 { 00:11:01.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.969 "dma_device_type": 2 00:11:01.969 } 00:11:01.969 ], 00:11:01.969 "driver_specific": { 00:11:01.969 "raid": { 00:11:01.969 "uuid": "852a38ff-967a-4ff6-b4f8-5a5a3a486776", 00:11:01.969 "strip_size_kb": 0, 00:11:01.969 "state": "online", 00:11:01.969 "raid_level": "raid1", 00:11:01.969 "superblock": false, 00:11:01.969 "num_base_bdevs": 4, 00:11:01.969 "num_base_bdevs_discovered": 4, 00:11:01.969 "num_base_bdevs_operational": 4, 00:11:01.969 "base_bdevs_list": [ 00:11:01.969 { 00:11:01.969 "name": "NewBaseBdev", 00:11:01.969 "uuid": "e3ab6b46-ec67-455d-bd9f-eba34b2a265c", 00:11:01.969 "is_configured": true, 00:11:01.969 "data_offset": 0, 00:11:01.969 "data_size": 65536 00:11:01.969 }, 00:11:01.969 { 00:11:01.969 "name": "BaseBdev2", 00:11:01.969 "uuid": "f81de6d5-b8fc-475b-9339-f0caafb4f99d", 00:11:01.969 "is_configured": true, 00:11:01.969 "data_offset": 0, 00:11:01.969 "data_size": 65536 00:11:01.969 }, 00:11:01.969 { 00:11:01.969 "name": "BaseBdev3", 00:11:01.969 "uuid": "96d2c05b-de9d-4ac7-843b-14daca7645fc", 00:11:01.969 "is_configured": true, 
00:11:01.969 "data_offset": 0, 00:11:01.969 "data_size": 65536 00:11:01.969 }, 00:11:01.969 { 00:11:01.969 "name": "BaseBdev4", 00:11:01.969 "uuid": "7986acca-65e1-432a-aa57-78e9fa43d01d", 00:11:01.969 "is_configured": true, 00:11:01.969 "data_offset": 0, 00:11:01.969 "data_size": 65536 00:11:01.969 } 00:11:01.969 ] 00:11:01.969 } 00:11:01.969 } 00:11:01.969 }' 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:01.969 BaseBdev2 00:11:01.969 BaseBdev3 00:11:01.969 BaseBdev4' 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.969 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.229 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.229 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.229 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:02.229 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.229 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.229 [2024-11-18 03:11:05.557660] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.229 [2024-11-18 03:11:05.557733] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.229 [2024-11-18 03:11:05.557835] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.229 [2024-11-18 03:11:05.558129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.229 [2024-11-18 03:11:05.558195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:11:02.229 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.229 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 84130 
00:11:02.229 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 84130 ']' 00:11:02.229 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 84130 00:11:02.229 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:02.229 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:02.229 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84130 00:11:02.229 killing process with pid 84130 00:11:02.229 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:02.230 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:02.230 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84130' 00:11:02.230 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 84130 00:11:02.230 [2024-11-18 03:11:05.603161] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.230 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 84130 00:11:02.230 [2024-11-18 03:11:05.644928] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:02.491 ************************************ 00:11:02.491 END TEST raid_state_function_test 00:11:02.491 ************************************ 00:11:02.491 00:11:02.491 real 0m9.491s 00:11:02.491 user 0m16.218s 00:11:02.491 sys 0m1.928s 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.491 03:11:05 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:02.491 03:11:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:02.491 03:11:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.491 03:11:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:02.491 ************************************ 00:11:02.491 START TEST raid_state_function_test_sb 00:11:02.491 ************************************ 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:02.491 03:11:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84785 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84785' 00:11:02.491 Process raid pid: 84785 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84785 00:11:02.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84785 ']' 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:02.491 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.491 [2024-11-18 03:11:06.054196] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:02.491 [2024-11-18 03:11:06.054411] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.751 [2024-11-18 03:11:06.207936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.751 [2024-11-18 03:11:06.258525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.751 [2024-11-18 03:11:06.301681] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.751 [2024-11-18 03:11:06.301805] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.321 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:03.321 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:03.321 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:03.321 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.321 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.321 [2024-11-18 03:11:06.895647] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:03.321 [2024-11-18 03:11:06.895701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:03.321 [2024-11-18 03:11:06.895715] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.321 [2024-11-18 03:11:06.895726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.321 [2024-11-18 03:11:06.895736] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:03.321 [2024-11-18 03:11:06.895787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.321 [2024-11-18 03:11:06.895795] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:03.321 [2024-11-18 03:11:06.895804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:03.582 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.582 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:03.582 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.582 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.582 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.582 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.582 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.582 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.582 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.582 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.582 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.582 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.582 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.582 03:11:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.582 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.582 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.582 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.582 "name": "Existed_Raid", 00:11:03.582 "uuid": "19fca24e-c230-48ab-a2c1-53c372c8da42", 00:11:03.582 "strip_size_kb": 0, 00:11:03.582 "state": "configuring", 00:11:03.582 "raid_level": "raid1", 00:11:03.582 "superblock": true, 00:11:03.582 "num_base_bdevs": 4, 00:11:03.582 "num_base_bdevs_discovered": 0, 00:11:03.582 "num_base_bdevs_operational": 4, 00:11:03.582 "base_bdevs_list": [ 00:11:03.582 { 00:11:03.582 "name": "BaseBdev1", 00:11:03.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.582 "is_configured": false, 00:11:03.582 "data_offset": 0, 00:11:03.582 "data_size": 0 00:11:03.582 }, 00:11:03.582 { 00:11:03.582 "name": "BaseBdev2", 00:11:03.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.582 "is_configured": false, 00:11:03.582 "data_offset": 0, 00:11:03.582 "data_size": 0 00:11:03.582 }, 00:11:03.582 { 00:11:03.582 "name": "BaseBdev3", 00:11:03.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.582 "is_configured": false, 00:11:03.582 "data_offset": 0, 00:11:03.582 "data_size": 0 00:11:03.582 }, 00:11:03.582 { 00:11:03.582 "name": "BaseBdev4", 00:11:03.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.582 "is_configured": false, 00:11:03.582 "data_offset": 0, 00:11:03.582 "data_size": 0 00:11:03.582 } 00:11:03.582 ] 00:11:03.582 }' 00:11:03.582 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.582 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.842 03:11:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.842 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.842 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.843 [2024-11-18 03:11:07.326804] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.843 [2024-11-18 03:11:07.326919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.843 [2024-11-18 03:11:07.338841] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:03.843 [2024-11-18 03:11:07.338939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:03.843 [2024-11-18 03:11:07.338993] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.843 [2024-11-18 03:11:07.339049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.843 [2024-11-18 03:11:07.339069] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:03.843 [2024-11-18 03:11:07.339091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.843 [2024-11-18 03:11:07.339114] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:03.843 [2024-11-18 03:11:07.339145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.843 [2024-11-18 03:11:07.359878] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.843 BaseBdev1 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.843 [ 00:11:03.843 { 00:11:03.843 "name": "BaseBdev1", 00:11:03.843 "aliases": [ 00:11:03.843 "70ac2e9c-0e88-4770-bd1b-8626d50063df" 00:11:03.843 ], 00:11:03.843 "product_name": "Malloc disk", 00:11:03.843 "block_size": 512, 00:11:03.843 "num_blocks": 65536, 00:11:03.843 "uuid": "70ac2e9c-0e88-4770-bd1b-8626d50063df", 00:11:03.843 "assigned_rate_limits": { 00:11:03.843 "rw_ios_per_sec": 0, 00:11:03.843 "rw_mbytes_per_sec": 0, 00:11:03.843 "r_mbytes_per_sec": 0, 00:11:03.843 "w_mbytes_per_sec": 0 00:11:03.843 }, 00:11:03.843 "claimed": true, 00:11:03.843 "claim_type": "exclusive_write", 00:11:03.843 "zoned": false, 00:11:03.843 "supported_io_types": { 00:11:03.843 "read": true, 00:11:03.843 "write": true, 00:11:03.843 "unmap": true, 00:11:03.843 "flush": true, 00:11:03.843 "reset": true, 00:11:03.843 "nvme_admin": false, 00:11:03.843 "nvme_io": false, 00:11:03.843 "nvme_io_md": false, 00:11:03.843 "write_zeroes": true, 00:11:03.843 "zcopy": true, 00:11:03.843 "get_zone_info": false, 00:11:03.843 "zone_management": false, 00:11:03.843 "zone_append": false, 00:11:03.843 "compare": false, 00:11:03.843 "compare_and_write": false, 00:11:03.843 "abort": true, 00:11:03.843 "seek_hole": false, 00:11:03.843 "seek_data": false, 00:11:03.843 "copy": true, 00:11:03.843 "nvme_iov_md": false 00:11:03.843 }, 00:11:03.843 "memory_domains": [ 00:11:03.843 { 00:11:03.843 "dma_device_id": "system", 00:11:03.843 "dma_device_type": 1 00:11:03.843 }, 00:11:03.843 { 00:11:03.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.843 "dma_device_type": 2 00:11:03.843 } 00:11:03.843 ], 00:11:03.843 "driver_specific": {} 
00:11:03.843 } 00:11:03.843 ] 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.843 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.103 03:11:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.103 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.103 "name": "Existed_Raid", 00:11:04.103 "uuid": "25074eb1-7f17-4f75-8989-0606c3950254", 00:11:04.103 "strip_size_kb": 0, 00:11:04.103 "state": "configuring", 00:11:04.103 "raid_level": "raid1", 00:11:04.103 "superblock": true, 00:11:04.103 "num_base_bdevs": 4, 00:11:04.103 "num_base_bdevs_discovered": 1, 00:11:04.103 "num_base_bdevs_operational": 4, 00:11:04.103 "base_bdevs_list": [ 00:11:04.103 { 00:11:04.103 "name": "BaseBdev1", 00:11:04.103 "uuid": "70ac2e9c-0e88-4770-bd1b-8626d50063df", 00:11:04.103 "is_configured": true, 00:11:04.103 "data_offset": 2048, 00:11:04.103 "data_size": 63488 00:11:04.103 }, 00:11:04.103 { 00:11:04.103 "name": "BaseBdev2", 00:11:04.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.103 "is_configured": false, 00:11:04.103 "data_offset": 0, 00:11:04.103 "data_size": 0 00:11:04.103 }, 00:11:04.103 { 00:11:04.103 "name": "BaseBdev3", 00:11:04.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.103 "is_configured": false, 00:11:04.103 "data_offset": 0, 00:11:04.103 "data_size": 0 00:11:04.103 }, 00:11:04.103 { 00:11:04.103 "name": "BaseBdev4", 00:11:04.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.103 "is_configured": false, 00:11:04.103 "data_offset": 0, 00:11:04.103 "data_size": 0 00:11:04.103 } 00:11:04.103 ] 00:11:04.103 }' 00:11:04.103 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.103 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:04.364 [2024-11-18 03:11:07.843121] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:04.364 [2024-11-18 03:11:07.843240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.364 [2024-11-18 03:11:07.855145] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.364 [2024-11-18 03:11:07.857131] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.364 [2024-11-18 03:11:07.857178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.364 [2024-11-18 03:11:07.857189] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:04.364 [2024-11-18 03:11:07.857199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:04.364 [2024-11-18 03:11:07.857206] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:04.364 [2024-11-18 03:11:07.857215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:04.364 03:11:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.364 "name": 
"Existed_Raid", 00:11:04.364 "uuid": "3c24b7f8-aa2a-4d0d-a5c4-af8cbec2d493", 00:11:04.364 "strip_size_kb": 0, 00:11:04.364 "state": "configuring", 00:11:04.364 "raid_level": "raid1", 00:11:04.364 "superblock": true, 00:11:04.364 "num_base_bdevs": 4, 00:11:04.364 "num_base_bdevs_discovered": 1, 00:11:04.364 "num_base_bdevs_operational": 4, 00:11:04.364 "base_bdevs_list": [ 00:11:04.364 { 00:11:04.364 "name": "BaseBdev1", 00:11:04.364 "uuid": "70ac2e9c-0e88-4770-bd1b-8626d50063df", 00:11:04.364 "is_configured": true, 00:11:04.364 "data_offset": 2048, 00:11:04.364 "data_size": 63488 00:11:04.364 }, 00:11:04.364 { 00:11:04.364 "name": "BaseBdev2", 00:11:04.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.364 "is_configured": false, 00:11:04.364 "data_offset": 0, 00:11:04.364 "data_size": 0 00:11:04.364 }, 00:11:04.364 { 00:11:04.364 "name": "BaseBdev3", 00:11:04.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.364 "is_configured": false, 00:11:04.364 "data_offset": 0, 00:11:04.364 "data_size": 0 00:11:04.364 }, 00:11:04.364 { 00:11:04.364 "name": "BaseBdev4", 00:11:04.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.364 "is_configured": false, 00:11:04.364 "data_offset": 0, 00:11:04.364 "data_size": 0 00:11:04.364 } 00:11:04.364 ] 00:11:04.364 }' 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.364 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.935 BaseBdev2 00:11:04.935 [2024-11-18 03:11:08.335600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.935 [ 00:11:04.935 { 00:11:04.935 "name": "BaseBdev2", 00:11:04.935 "aliases": [ 00:11:04.935 "abd5689a-4a92-4b9e-9b8a-887bb3a99086" 00:11:04.935 ], 00:11:04.935 "product_name": "Malloc disk", 00:11:04.935 "block_size": 512, 00:11:04.935 "num_blocks": 65536, 00:11:04.935 "uuid": "abd5689a-4a92-4b9e-9b8a-887bb3a99086", 00:11:04.935 "assigned_rate_limits": { 
00:11:04.935 "rw_ios_per_sec": 0, 00:11:04.935 "rw_mbytes_per_sec": 0, 00:11:04.935 "r_mbytes_per_sec": 0, 00:11:04.935 "w_mbytes_per_sec": 0 00:11:04.935 }, 00:11:04.935 "claimed": true, 00:11:04.935 "claim_type": "exclusive_write", 00:11:04.935 "zoned": false, 00:11:04.935 "supported_io_types": { 00:11:04.935 "read": true, 00:11:04.935 "write": true, 00:11:04.935 "unmap": true, 00:11:04.935 "flush": true, 00:11:04.935 "reset": true, 00:11:04.935 "nvme_admin": false, 00:11:04.935 "nvme_io": false, 00:11:04.935 "nvme_io_md": false, 00:11:04.935 "write_zeroes": true, 00:11:04.935 "zcopy": true, 00:11:04.935 "get_zone_info": false, 00:11:04.935 "zone_management": false, 00:11:04.935 "zone_append": false, 00:11:04.935 "compare": false, 00:11:04.935 "compare_and_write": false, 00:11:04.935 "abort": true, 00:11:04.935 "seek_hole": false, 00:11:04.935 "seek_data": false, 00:11:04.935 "copy": true, 00:11:04.935 "nvme_iov_md": false 00:11:04.935 }, 00:11:04.935 "memory_domains": [ 00:11:04.935 { 00:11:04.935 "dma_device_id": "system", 00:11:04.935 "dma_device_type": 1 00:11:04.935 }, 00:11:04.935 { 00:11:04.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.935 "dma_device_type": 2 00:11:04.935 } 00:11:04.935 ], 00:11:04.935 "driver_specific": {} 00:11:04.935 } 00:11:04.935 ] 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.935 "name": "Existed_Raid", 00:11:04.935 "uuid": "3c24b7f8-aa2a-4d0d-a5c4-af8cbec2d493", 00:11:04.935 "strip_size_kb": 0, 00:11:04.935 "state": "configuring", 00:11:04.935 "raid_level": "raid1", 00:11:04.935 "superblock": true, 00:11:04.935 "num_base_bdevs": 4, 00:11:04.935 "num_base_bdevs_discovered": 2, 00:11:04.935 "num_base_bdevs_operational": 4, 00:11:04.935 
"base_bdevs_list": [ 00:11:04.935 { 00:11:04.935 "name": "BaseBdev1", 00:11:04.935 "uuid": "70ac2e9c-0e88-4770-bd1b-8626d50063df", 00:11:04.935 "is_configured": true, 00:11:04.935 "data_offset": 2048, 00:11:04.935 "data_size": 63488 00:11:04.935 }, 00:11:04.935 { 00:11:04.935 "name": "BaseBdev2", 00:11:04.935 "uuid": "abd5689a-4a92-4b9e-9b8a-887bb3a99086", 00:11:04.935 "is_configured": true, 00:11:04.935 "data_offset": 2048, 00:11:04.935 "data_size": 63488 00:11:04.935 }, 00:11:04.935 { 00:11:04.935 "name": "BaseBdev3", 00:11:04.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.935 "is_configured": false, 00:11:04.935 "data_offset": 0, 00:11:04.935 "data_size": 0 00:11:04.935 }, 00:11:04.935 { 00:11:04.935 "name": "BaseBdev4", 00:11:04.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.935 "is_configured": false, 00:11:04.935 "data_offset": 0, 00:11:04.935 "data_size": 0 00:11:04.935 } 00:11:04.935 ] 00:11:04.935 }' 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.935 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.506 [2024-11-18 03:11:08.830159] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:05.506 BaseBdev3 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.506 [ 00:11:05.506 { 00:11:05.506 "name": "BaseBdev3", 00:11:05.506 "aliases": [ 00:11:05.506 "02ebc9a3-98c9-4812-a5a5-97a2ec6ab8ff" 00:11:05.506 ], 00:11:05.506 "product_name": "Malloc disk", 00:11:05.506 "block_size": 512, 00:11:05.506 "num_blocks": 65536, 00:11:05.506 "uuid": "02ebc9a3-98c9-4812-a5a5-97a2ec6ab8ff", 00:11:05.506 "assigned_rate_limits": { 00:11:05.506 "rw_ios_per_sec": 0, 00:11:05.506 "rw_mbytes_per_sec": 0, 00:11:05.506 "r_mbytes_per_sec": 0, 00:11:05.506 "w_mbytes_per_sec": 0 00:11:05.506 }, 00:11:05.506 "claimed": true, 00:11:05.506 "claim_type": "exclusive_write", 00:11:05.506 "zoned": false, 00:11:05.506 "supported_io_types": { 00:11:05.506 "read": true, 00:11:05.506 
"write": true, 00:11:05.506 "unmap": true, 00:11:05.506 "flush": true, 00:11:05.506 "reset": true, 00:11:05.506 "nvme_admin": false, 00:11:05.506 "nvme_io": false, 00:11:05.506 "nvme_io_md": false, 00:11:05.506 "write_zeroes": true, 00:11:05.506 "zcopy": true, 00:11:05.506 "get_zone_info": false, 00:11:05.506 "zone_management": false, 00:11:05.506 "zone_append": false, 00:11:05.506 "compare": false, 00:11:05.506 "compare_and_write": false, 00:11:05.506 "abort": true, 00:11:05.506 "seek_hole": false, 00:11:05.506 "seek_data": false, 00:11:05.506 "copy": true, 00:11:05.506 "nvme_iov_md": false 00:11:05.506 }, 00:11:05.506 "memory_domains": [ 00:11:05.506 { 00:11:05.506 "dma_device_id": "system", 00:11:05.506 "dma_device_type": 1 00:11:05.506 }, 00:11:05.506 { 00:11:05.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.506 "dma_device_type": 2 00:11:05.506 } 00:11:05.506 ], 00:11:05.506 "driver_specific": {} 00:11:05.506 } 00:11:05.506 ] 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.506 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.507 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.507 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.507 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.507 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.507 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.507 "name": "Existed_Raid", 00:11:05.507 "uuid": "3c24b7f8-aa2a-4d0d-a5c4-af8cbec2d493", 00:11:05.507 "strip_size_kb": 0, 00:11:05.507 "state": "configuring", 00:11:05.507 "raid_level": "raid1", 00:11:05.507 "superblock": true, 00:11:05.507 "num_base_bdevs": 4, 00:11:05.507 "num_base_bdevs_discovered": 3, 00:11:05.507 "num_base_bdevs_operational": 4, 00:11:05.507 "base_bdevs_list": [ 00:11:05.507 { 00:11:05.507 "name": "BaseBdev1", 00:11:05.507 "uuid": "70ac2e9c-0e88-4770-bd1b-8626d50063df", 00:11:05.507 "is_configured": true, 00:11:05.507 "data_offset": 2048, 00:11:05.507 "data_size": 63488 00:11:05.507 }, 00:11:05.507 { 00:11:05.507 "name": "BaseBdev2", 00:11:05.507 "uuid": 
"abd5689a-4a92-4b9e-9b8a-887bb3a99086", 00:11:05.507 "is_configured": true, 00:11:05.507 "data_offset": 2048, 00:11:05.507 "data_size": 63488 00:11:05.507 }, 00:11:05.507 { 00:11:05.507 "name": "BaseBdev3", 00:11:05.507 "uuid": "02ebc9a3-98c9-4812-a5a5-97a2ec6ab8ff", 00:11:05.507 "is_configured": true, 00:11:05.507 "data_offset": 2048, 00:11:05.507 "data_size": 63488 00:11:05.507 }, 00:11:05.507 { 00:11:05.507 "name": "BaseBdev4", 00:11:05.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.507 "is_configured": false, 00:11:05.507 "data_offset": 0, 00:11:05.507 "data_size": 0 00:11:05.507 } 00:11:05.507 ] 00:11:05.507 }' 00:11:05.507 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.507 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.766 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:05.766 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.766 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.766 [2024-11-18 03:11:09.304618] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:05.766 [2024-11-18 03:11:09.304949] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:05.766 [2024-11-18 03:11:09.304987] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:05.766 [2024-11-18 03:11:09.305302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:05.766 BaseBdev4 00:11:05.766 [2024-11-18 03:11:09.305451] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:05.766 [2024-11-18 03:11:09.305466] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:11:05.766 [2024-11-18 03:11:09.305616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.766 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.766 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:05.766 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:05.766 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:05.766 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:05.766 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:05.766 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:05.766 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:05.766 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.766 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.766 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.766 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:05.766 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.766 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.766 [ 00:11:05.766 { 00:11:05.766 "name": "BaseBdev4", 00:11:05.766 "aliases": [ 00:11:05.766 "362d2c29-acb8-4e0d-a799-37a9a0c1f2c5" 00:11:05.766 ], 00:11:05.766 "product_name": "Malloc disk", 00:11:05.766 "block_size": 512, 00:11:05.766 
"num_blocks": 65536, 00:11:05.766 "uuid": "362d2c29-acb8-4e0d-a799-37a9a0c1f2c5", 00:11:05.766 "assigned_rate_limits": { 00:11:05.766 "rw_ios_per_sec": 0, 00:11:05.766 "rw_mbytes_per_sec": 0, 00:11:05.766 "r_mbytes_per_sec": 0, 00:11:05.766 "w_mbytes_per_sec": 0 00:11:05.766 }, 00:11:05.766 "claimed": true, 00:11:05.766 "claim_type": "exclusive_write", 00:11:05.766 "zoned": false, 00:11:05.766 "supported_io_types": { 00:11:05.766 "read": true, 00:11:05.766 "write": true, 00:11:05.766 "unmap": true, 00:11:05.766 "flush": true, 00:11:05.766 "reset": true, 00:11:05.766 "nvme_admin": false, 00:11:05.766 "nvme_io": false, 00:11:05.766 "nvme_io_md": false, 00:11:05.766 "write_zeroes": true, 00:11:05.766 "zcopy": true, 00:11:05.766 "get_zone_info": false, 00:11:05.766 "zone_management": false, 00:11:05.766 "zone_append": false, 00:11:05.766 "compare": false, 00:11:05.766 "compare_and_write": false, 00:11:05.766 "abort": true, 00:11:05.766 "seek_hole": false, 00:11:05.766 "seek_data": false, 00:11:05.766 "copy": true, 00:11:05.766 "nvme_iov_md": false 00:11:05.766 }, 00:11:05.766 "memory_domains": [ 00:11:05.766 { 00:11:05.766 "dma_device_id": "system", 00:11:05.766 "dma_device_type": 1 00:11:05.766 }, 00:11:05.766 { 00:11:05.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.766 "dma_device_type": 2 00:11:05.766 } 00:11:05.766 ], 00:11:05.766 "driver_specific": {} 00:11:05.766 } 00:11:05.766 ] 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.026 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.026 "name": "Existed_Raid", 00:11:06.026 "uuid": "3c24b7f8-aa2a-4d0d-a5c4-af8cbec2d493", 00:11:06.026 "strip_size_kb": 0, 00:11:06.026 "state": "online", 00:11:06.026 "raid_level": "raid1", 00:11:06.026 "superblock": true, 00:11:06.026 "num_base_bdevs": 4, 
00:11:06.026 "num_base_bdevs_discovered": 4, 00:11:06.026 "num_base_bdevs_operational": 4, 00:11:06.026 "base_bdevs_list": [ 00:11:06.026 { 00:11:06.026 "name": "BaseBdev1", 00:11:06.026 "uuid": "70ac2e9c-0e88-4770-bd1b-8626d50063df", 00:11:06.026 "is_configured": true, 00:11:06.026 "data_offset": 2048, 00:11:06.026 "data_size": 63488 00:11:06.026 }, 00:11:06.026 { 00:11:06.026 "name": "BaseBdev2", 00:11:06.026 "uuid": "abd5689a-4a92-4b9e-9b8a-887bb3a99086", 00:11:06.026 "is_configured": true, 00:11:06.026 "data_offset": 2048, 00:11:06.026 "data_size": 63488 00:11:06.026 }, 00:11:06.026 { 00:11:06.026 "name": "BaseBdev3", 00:11:06.026 "uuid": "02ebc9a3-98c9-4812-a5a5-97a2ec6ab8ff", 00:11:06.026 "is_configured": true, 00:11:06.026 "data_offset": 2048, 00:11:06.026 "data_size": 63488 00:11:06.026 }, 00:11:06.026 { 00:11:06.026 "name": "BaseBdev4", 00:11:06.026 "uuid": "362d2c29-acb8-4e0d-a799-37a9a0c1f2c5", 00:11:06.026 "is_configured": true, 00:11:06.026 "data_offset": 2048, 00:11:06.026 "data_size": 63488 00:11:06.026 } 00:11:06.026 ] 00:11:06.026 }' 00:11:06.027 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.027 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.287 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:06.287 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:06.287 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:06.287 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:06.287 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:06.287 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:06.287 
03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:06.287 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:06.287 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.287 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.287 [2024-11-18 03:11:09.776229] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.287 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.287 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:06.287 "name": "Existed_Raid", 00:11:06.287 "aliases": [ 00:11:06.287 "3c24b7f8-aa2a-4d0d-a5c4-af8cbec2d493" 00:11:06.287 ], 00:11:06.287 "product_name": "Raid Volume", 00:11:06.287 "block_size": 512, 00:11:06.287 "num_blocks": 63488, 00:11:06.287 "uuid": "3c24b7f8-aa2a-4d0d-a5c4-af8cbec2d493", 00:11:06.287 "assigned_rate_limits": { 00:11:06.287 "rw_ios_per_sec": 0, 00:11:06.287 "rw_mbytes_per_sec": 0, 00:11:06.287 "r_mbytes_per_sec": 0, 00:11:06.287 "w_mbytes_per_sec": 0 00:11:06.287 }, 00:11:06.287 "claimed": false, 00:11:06.287 "zoned": false, 00:11:06.287 "supported_io_types": { 00:11:06.287 "read": true, 00:11:06.287 "write": true, 00:11:06.287 "unmap": false, 00:11:06.287 "flush": false, 00:11:06.287 "reset": true, 00:11:06.287 "nvme_admin": false, 00:11:06.287 "nvme_io": false, 00:11:06.287 "nvme_io_md": false, 00:11:06.287 "write_zeroes": true, 00:11:06.287 "zcopy": false, 00:11:06.287 "get_zone_info": false, 00:11:06.287 "zone_management": false, 00:11:06.287 "zone_append": false, 00:11:06.287 "compare": false, 00:11:06.287 "compare_and_write": false, 00:11:06.287 "abort": false, 00:11:06.287 "seek_hole": false, 00:11:06.287 "seek_data": false, 00:11:06.287 "copy": false, 00:11:06.287 
"nvme_iov_md": false 00:11:06.287 }, 00:11:06.287 "memory_domains": [ 00:11:06.287 { 00:11:06.287 "dma_device_id": "system", 00:11:06.287 "dma_device_type": 1 00:11:06.287 }, 00:11:06.287 { 00:11:06.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.287 "dma_device_type": 2 00:11:06.287 }, 00:11:06.287 { 00:11:06.287 "dma_device_id": "system", 00:11:06.287 "dma_device_type": 1 00:11:06.287 }, 00:11:06.287 { 00:11:06.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.287 "dma_device_type": 2 00:11:06.287 }, 00:11:06.287 { 00:11:06.287 "dma_device_id": "system", 00:11:06.287 "dma_device_type": 1 00:11:06.287 }, 00:11:06.287 { 00:11:06.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.287 "dma_device_type": 2 00:11:06.287 }, 00:11:06.287 { 00:11:06.287 "dma_device_id": "system", 00:11:06.287 "dma_device_type": 1 00:11:06.287 }, 00:11:06.287 { 00:11:06.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.287 "dma_device_type": 2 00:11:06.287 } 00:11:06.287 ], 00:11:06.287 "driver_specific": { 00:11:06.287 "raid": { 00:11:06.287 "uuid": "3c24b7f8-aa2a-4d0d-a5c4-af8cbec2d493", 00:11:06.287 "strip_size_kb": 0, 00:11:06.287 "state": "online", 00:11:06.287 "raid_level": "raid1", 00:11:06.287 "superblock": true, 00:11:06.287 "num_base_bdevs": 4, 00:11:06.287 "num_base_bdevs_discovered": 4, 00:11:06.287 "num_base_bdevs_operational": 4, 00:11:06.287 "base_bdevs_list": [ 00:11:06.287 { 00:11:06.287 "name": "BaseBdev1", 00:11:06.287 "uuid": "70ac2e9c-0e88-4770-bd1b-8626d50063df", 00:11:06.287 "is_configured": true, 00:11:06.287 "data_offset": 2048, 00:11:06.287 "data_size": 63488 00:11:06.287 }, 00:11:06.287 { 00:11:06.287 "name": "BaseBdev2", 00:11:06.287 "uuid": "abd5689a-4a92-4b9e-9b8a-887bb3a99086", 00:11:06.287 "is_configured": true, 00:11:06.287 "data_offset": 2048, 00:11:06.287 "data_size": 63488 00:11:06.287 }, 00:11:06.287 { 00:11:06.287 "name": "BaseBdev3", 00:11:06.287 "uuid": "02ebc9a3-98c9-4812-a5a5-97a2ec6ab8ff", 00:11:06.287 "is_configured": true, 
00:11:06.287 "data_offset": 2048, 00:11:06.287 "data_size": 63488 00:11:06.287 }, 00:11:06.287 { 00:11:06.287 "name": "BaseBdev4", 00:11:06.287 "uuid": "362d2c29-acb8-4e0d-a799-37a9a0c1f2c5", 00:11:06.287 "is_configured": true, 00:11:06.287 "data_offset": 2048, 00:11:06.287 "data_size": 63488 00:11:06.287 } 00:11:06.287 ] 00:11:06.287 } 00:11:06.287 } 00:11:06.287 }' 00:11:06.287 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:06.287 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:06.287 BaseBdev2 00:11:06.287 BaseBdev3 00:11:06.287 BaseBdev4' 00:11:06.287 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.548 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:06.548 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.548 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:06.548 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.548 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.548 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.548 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.548 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.548 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.548 03:11:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.548 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:06.548 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.548 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.548 03:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.548 03:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.548 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.548 [2024-11-18 03:11:10.107400] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:06.808 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.808 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:06.808 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:06.808 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:06.808 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:06.808 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:06.808 03:11:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:06.808 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.808 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.809 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.809 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.809 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.809 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.809 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.809 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.809 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.809 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.809 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.809 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.809 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.809 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.809 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.809 "name": "Existed_Raid", 00:11:06.809 "uuid": "3c24b7f8-aa2a-4d0d-a5c4-af8cbec2d493", 00:11:06.809 "strip_size_kb": 0, 00:11:06.809 
"state": "online", 00:11:06.809 "raid_level": "raid1", 00:11:06.809 "superblock": true, 00:11:06.809 "num_base_bdevs": 4, 00:11:06.809 "num_base_bdevs_discovered": 3, 00:11:06.809 "num_base_bdevs_operational": 3, 00:11:06.809 "base_bdevs_list": [ 00:11:06.809 { 00:11:06.809 "name": null, 00:11:06.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.809 "is_configured": false, 00:11:06.809 "data_offset": 0, 00:11:06.809 "data_size": 63488 00:11:06.809 }, 00:11:06.809 { 00:11:06.809 "name": "BaseBdev2", 00:11:06.809 "uuid": "abd5689a-4a92-4b9e-9b8a-887bb3a99086", 00:11:06.809 "is_configured": true, 00:11:06.809 "data_offset": 2048, 00:11:06.809 "data_size": 63488 00:11:06.809 }, 00:11:06.809 { 00:11:06.809 "name": "BaseBdev3", 00:11:06.809 "uuid": "02ebc9a3-98c9-4812-a5a5-97a2ec6ab8ff", 00:11:06.809 "is_configured": true, 00:11:06.809 "data_offset": 2048, 00:11:06.809 "data_size": 63488 00:11:06.809 }, 00:11:06.809 { 00:11:06.809 "name": "BaseBdev4", 00:11:06.809 "uuid": "362d2c29-acb8-4e0d-a799-37a9a0c1f2c5", 00:11:06.809 "is_configured": true, 00:11:06.809 "data_offset": 2048, 00:11:06.809 "data_size": 63488 00:11:06.809 } 00:11:06.809 ] 00:11:06.809 }' 00:11:06.809 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.809 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.069 03:11:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.069 [2024-11-18 03:11:10.590427] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.069 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.330 [2024-11-18 03:11:10.645971] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.330 [2024-11-18 03:11:10.713499] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:07.330 [2024-11-18 03:11:10.713665] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.330 [2024-11-18 03:11:10.725610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.330 [2024-11-18 03:11:10.725752] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.330 [2024-11-18 03:11:10.725799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:07.330 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.331 BaseBdev2 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:07.331 [ 00:11:07.331 { 00:11:07.331 "name": "BaseBdev2", 00:11:07.331 "aliases": [ 00:11:07.331 "624357ab-208e-4897-9c2c-236057bcd1a6" 00:11:07.331 ], 00:11:07.331 "product_name": "Malloc disk", 00:11:07.331 "block_size": 512, 00:11:07.331 "num_blocks": 65536, 00:11:07.331 "uuid": "624357ab-208e-4897-9c2c-236057bcd1a6", 00:11:07.331 "assigned_rate_limits": { 00:11:07.331 "rw_ios_per_sec": 0, 00:11:07.331 "rw_mbytes_per_sec": 0, 00:11:07.331 "r_mbytes_per_sec": 0, 00:11:07.331 "w_mbytes_per_sec": 0 00:11:07.331 }, 00:11:07.331 "claimed": false, 00:11:07.331 "zoned": false, 00:11:07.331 "supported_io_types": { 00:11:07.331 "read": true, 00:11:07.331 "write": true, 00:11:07.331 "unmap": true, 00:11:07.331 "flush": true, 00:11:07.331 "reset": true, 00:11:07.331 "nvme_admin": false, 00:11:07.331 "nvme_io": false, 00:11:07.331 "nvme_io_md": false, 00:11:07.331 "write_zeroes": true, 00:11:07.331 "zcopy": true, 00:11:07.331 "get_zone_info": false, 00:11:07.331 "zone_management": false, 00:11:07.331 "zone_append": false, 00:11:07.331 "compare": false, 00:11:07.331 "compare_and_write": false, 00:11:07.331 "abort": true, 00:11:07.331 "seek_hole": false, 00:11:07.331 "seek_data": false, 00:11:07.331 "copy": true, 00:11:07.331 "nvme_iov_md": false 00:11:07.331 }, 00:11:07.331 "memory_domains": [ 00:11:07.331 { 00:11:07.331 "dma_device_id": "system", 00:11:07.331 "dma_device_type": 1 00:11:07.331 }, 00:11:07.331 { 00:11:07.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.331 "dma_device_type": 2 00:11:07.331 } 00:11:07.331 ], 00:11:07.331 "driver_specific": {} 00:11:07.331 } 00:11:07.331 ] 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:07.331 03:11:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.331 BaseBdev3 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.331 03:11:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.331 [ 00:11:07.331 { 00:11:07.331 "name": "BaseBdev3", 00:11:07.331 "aliases": [ 00:11:07.331 "b4ba5755-cdfb-49d0-a1fe-822844fa33b2" 00:11:07.331 ], 00:11:07.331 "product_name": "Malloc disk", 00:11:07.331 "block_size": 512, 00:11:07.331 "num_blocks": 65536, 00:11:07.331 "uuid": "b4ba5755-cdfb-49d0-a1fe-822844fa33b2", 00:11:07.331 "assigned_rate_limits": { 00:11:07.331 "rw_ios_per_sec": 0, 00:11:07.331 "rw_mbytes_per_sec": 0, 00:11:07.331 "r_mbytes_per_sec": 0, 00:11:07.331 "w_mbytes_per_sec": 0 00:11:07.331 }, 00:11:07.331 "claimed": false, 00:11:07.331 "zoned": false, 00:11:07.331 "supported_io_types": { 00:11:07.331 "read": true, 00:11:07.331 "write": true, 00:11:07.331 "unmap": true, 00:11:07.331 "flush": true, 00:11:07.331 "reset": true, 00:11:07.331 "nvme_admin": false, 00:11:07.331 "nvme_io": false, 00:11:07.331 "nvme_io_md": false, 00:11:07.331 "write_zeroes": true, 00:11:07.331 "zcopy": true, 00:11:07.331 "get_zone_info": false, 00:11:07.331 "zone_management": false, 00:11:07.331 "zone_append": false, 00:11:07.331 "compare": false, 00:11:07.331 "compare_and_write": false, 00:11:07.331 "abort": true, 00:11:07.331 "seek_hole": false, 00:11:07.331 "seek_data": false, 00:11:07.331 "copy": true, 00:11:07.331 "nvme_iov_md": false 00:11:07.331 }, 00:11:07.331 "memory_domains": [ 00:11:07.331 { 00:11:07.331 "dma_device_id": "system", 00:11:07.331 "dma_device_type": 1 00:11:07.331 }, 00:11:07.331 { 00:11:07.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.331 "dma_device_type": 2 00:11:07.331 } 00:11:07.331 ], 00:11:07.331 "driver_specific": {} 00:11:07.331 } 00:11:07.331 ] 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.331 BaseBdev4 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.331 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.592 [ 00:11:07.592 { 00:11:07.592 "name": "BaseBdev4", 00:11:07.592 "aliases": [ 00:11:07.592 "86c40e38-4895-4858-b49f-db8b5f6f70a3" 00:11:07.592 ], 00:11:07.592 "product_name": "Malloc disk", 00:11:07.592 "block_size": 512, 00:11:07.592 "num_blocks": 65536, 00:11:07.592 "uuid": "86c40e38-4895-4858-b49f-db8b5f6f70a3", 00:11:07.592 "assigned_rate_limits": { 00:11:07.592 "rw_ios_per_sec": 0, 00:11:07.592 "rw_mbytes_per_sec": 0, 00:11:07.592 "r_mbytes_per_sec": 0, 00:11:07.592 "w_mbytes_per_sec": 0 00:11:07.592 }, 00:11:07.592 "claimed": false, 00:11:07.592 "zoned": false, 00:11:07.592 "supported_io_types": { 00:11:07.592 "read": true, 00:11:07.592 "write": true, 00:11:07.592 "unmap": true, 00:11:07.592 "flush": true, 00:11:07.592 "reset": true, 00:11:07.592 "nvme_admin": false, 00:11:07.592 "nvme_io": false, 00:11:07.592 "nvme_io_md": false, 00:11:07.592 "write_zeroes": true, 00:11:07.592 "zcopy": true, 00:11:07.592 "get_zone_info": false, 00:11:07.592 "zone_management": false, 00:11:07.592 "zone_append": false, 00:11:07.592 "compare": false, 00:11:07.592 "compare_and_write": false, 00:11:07.592 "abort": true, 00:11:07.592 "seek_hole": false, 00:11:07.592 "seek_data": false, 00:11:07.592 "copy": true, 00:11:07.592 "nvme_iov_md": false 00:11:07.592 }, 00:11:07.592 "memory_domains": [ 00:11:07.592 { 00:11:07.592 "dma_device_id": "system", 00:11:07.592 "dma_device_type": 1 00:11:07.592 }, 00:11:07.592 { 00:11:07.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.592 "dma_device_type": 2 00:11:07.592 } 00:11:07.592 ], 00:11:07.592 "driver_specific": {} 00:11:07.592 } 00:11:07.592 ] 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.592 [2024-11-18 03:11:10.947778] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:07.592 [2024-11-18 03:11:10.947899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:07.592 [2024-11-18 03:11:10.947949] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:07.592 [2024-11-18 03:11:10.949966] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:07.592 [2024-11-18 03:11:10.950065] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.592 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.593 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.593 03:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.593 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.593 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.593 03:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.593 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.593 "name": "Existed_Raid", 00:11:07.593 "uuid": "960dbe88-7620-44d2-a0dd-50e58822adf9", 00:11:07.593 "strip_size_kb": 0, 00:11:07.593 "state": "configuring", 00:11:07.593 "raid_level": "raid1", 00:11:07.593 "superblock": true, 00:11:07.593 "num_base_bdevs": 4, 00:11:07.593 "num_base_bdevs_discovered": 3, 00:11:07.593 "num_base_bdevs_operational": 4, 00:11:07.593 "base_bdevs_list": [ 00:11:07.593 { 00:11:07.593 "name": "BaseBdev1", 00:11:07.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.593 "is_configured": false, 00:11:07.593 "data_offset": 0, 00:11:07.593 "data_size": 0 00:11:07.593 }, 00:11:07.593 { 00:11:07.593 "name": "BaseBdev2", 00:11:07.593 "uuid": "624357ab-208e-4897-9c2c-236057bcd1a6", 
00:11:07.593 "is_configured": true, 00:11:07.593 "data_offset": 2048, 00:11:07.593 "data_size": 63488 00:11:07.593 }, 00:11:07.593 { 00:11:07.593 "name": "BaseBdev3", 00:11:07.593 "uuid": "b4ba5755-cdfb-49d0-a1fe-822844fa33b2", 00:11:07.593 "is_configured": true, 00:11:07.593 "data_offset": 2048, 00:11:07.593 "data_size": 63488 00:11:07.593 }, 00:11:07.593 { 00:11:07.593 "name": "BaseBdev4", 00:11:07.593 "uuid": "86c40e38-4895-4858-b49f-db8b5f6f70a3", 00:11:07.593 "is_configured": true, 00:11:07.593 "data_offset": 2048, 00:11:07.593 "data_size": 63488 00:11:07.593 } 00:11:07.593 ] 00:11:07.593 }' 00:11:07.593 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.593 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.853 [2024-11-18 03:11:11.387036] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.853 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.114 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.114 "name": "Existed_Raid", 00:11:08.114 "uuid": "960dbe88-7620-44d2-a0dd-50e58822adf9", 00:11:08.114 "strip_size_kb": 0, 00:11:08.114 "state": "configuring", 00:11:08.114 "raid_level": "raid1", 00:11:08.114 "superblock": true, 00:11:08.114 "num_base_bdevs": 4, 00:11:08.114 "num_base_bdevs_discovered": 2, 00:11:08.114 "num_base_bdevs_operational": 4, 00:11:08.114 "base_bdevs_list": [ 00:11:08.114 { 00:11:08.114 "name": "BaseBdev1", 00:11:08.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.114 "is_configured": false, 00:11:08.114 "data_offset": 0, 00:11:08.114 "data_size": 0 00:11:08.114 }, 00:11:08.114 { 00:11:08.114 "name": null, 00:11:08.114 "uuid": "624357ab-208e-4897-9c2c-236057bcd1a6", 00:11:08.114 
"is_configured": false, 00:11:08.114 "data_offset": 0, 00:11:08.114 "data_size": 63488 00:11:08.114 }, 00:11:08.114 { 00:11:08.114 "name": "BaseBdev3", 00:11:08.114 "uuid": "b4ba5755-cdfb-49d0-a1fe-822844fa33b2", 00:11:08.114 "is_configured": true, 00:11:08.114 "data_offset": 2048, 00:11:08.114 "data_size": 63488 00:11:08.114 }, 00:11:08.114 { 00:11:08.114 "name": "BaseBdev4", 00:11:08.114 "uuid": "86c40e38-4895-4858-b49f-db8b5f6f70a3", 00:11:08.114 "is_configured": true, 00:11:08.114 "data_offset": 2048, 00:11:08.114 "data_size": 63488 00:11:08.114 } 00:11:08.114 ] 00:11:08.114 }' 00:11:08.114 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.114 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.374 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:08.374 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.374 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.374 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.374 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.374 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:08.374 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:08.374 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.374 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.374 [2024-11-18 03:11:11.869297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.374 BaseBdev1 
00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.375 [ 00:11:08.375 { 00:11:08.375 "name": "BaseBdev1", 00:11:08.375 "aliases": [ 00:11:08.375 "95f35266-127d-493a-bd11-5031e2271d15" 00:11:08.375 ], 00:11:08.375 "product_name": "Malloc disk", 00:11:08.375 "block_size": 512, 00:11:08.375 "num_blocks": 65536, 00:11:08.375 "uuid": "95f35266-127d-493a-bd11-5031e2271d15", 00:11:08.375 "assigned_rate_limits": { 00:11:08.375 
"rw_ios_per_sec": 0, 00:11:08.375 "rw_mbytes_per_sec": 0, 00:11:08.375 "r_mbytes_per_sec": 0, 00:11:08.375 "w_mbytes_per_sec": 0 00:11:08.375 }, 00:11:08.375 "claimed": true, 00:11:08.375 "claim_type": "exclusive_write", 00:11:08.375 "zoned": false, 00:11:08.375 "supported_io_types": { 00:11:08.375 "read": true, 00:11:08.375 "write": true, 00:11:08.375 "unmap": true, 00:11:08.375 "flush": true, 00:11:08.375 "reset": true, 00:11:08.375 "nvme_admin": false, 00:11:08.375 "nvme_io": false, 00:11:08.375 "nvme_io_md": false, 00:11:08.375 "write_zeroes": true, 00:11:08.375 "zcopy": true, 00:11:08.375 "get_zone_info": false, 00:11:08.375 "zone_management": false, 00:11:08.375 "zone_append": false, 00:11:08.375 "compare": false, 00:11:08.375 "compare_and_write": false, 00:11:08.375 "abort": true, 00:11:08.375 "seek_hole": false, 00:11:08.375 "seek_data": false, 00:11:08.375 "copy": true, 00:11:08.375 "nvme_iov_md": false 00:11:08.375 }, 00:11:08.375 "memory_domains": [ 00:11:08.375 { 00:11:08.375 "dma_device_id": "system", 00:11:08.375 "dma_device_type": 1 00:11:08.375 }, 00:11:08.375 { 00:11:08.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.375 "dma_device_type": 2 00:11:08.375 } 00:11:08.375 ], 00:11:08.375 "driver_specific": {} 00:11:08.375 } 00:11:08.375 ] 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.375 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.635 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.635 "name": "Existed_Raid", 00:11:08.635 "uuid": "960dbe88-7620-44d2-a0dd-50e58822adf9", 00:11:08.635 "strip_size_kb": 0, 00:11:08.635 "state": "configuring", 00:11:08.635 "raid_level": "raid1", 00:11:08.635 "superblock": true, 00:11:08.635 "num_base_bdevs": 4, 00:11:08.635 "num_base_bdevs_discovered": 3, 00:11:08.635 "num_base_bdevs_operational": 4, 00:11:08.635 "base_bdevs_list": [ 00:11:08.635 { 00:11:08.635 "name": "BaseBdev1", 00:11:08.635 "uuid": "95f35266-127d-493a-bd11-5031e2271d15", 00:11:08.635 "is_configured": true, 00:11:08.635 "data_offset": 2048, 00:11:08.635 "data_size": 63488 
00:11:08.635 }, 00:11:08.635 { 00:11:08.635 "name": null, 00:11:08.635 "uuid": "624357ab-208e-4897-9c2c-236057bcd1a6", 00:11:08.635 "is_configured": false, 00:11:08.635 "data_offset": 0, 00:11:08.635 "data_size": 63488 00:11:08.635 }, 00:11:08.635 { 00:11:08.635 "name": "BaseBdev3", 00:11:08.635 "uuid": "b4ba5755-cdfb-49d0-a1fe-822844fa33b2", 00:11:08.635 "is_configured": true, 00:11:08.635 "data_offset": 2048, 00:11:08.635 "data_size": 63488 00:11:08.635 }, 00:11:08.635 { 00:11:08.635 "name": "BaseBdev4", 00:11:08.635 "uuid": "86c40e38-4895-4858-b49f-db8b5f6f70a3", 00:11:08.635 "is_configured": true, 00:11:08.635 "data_offset": 2048, 00:11:08.635 "data_size": 63488 00:11:08.635 } 00:11:08.635 ] 00:11:08.635 }' 00:11:08.635 03:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.635 03:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.895 
[2024-11-18 03:11:12.400483] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.895 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.896 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.896 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.896 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.896 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.896 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.896 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.896 03:11:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.896 "name": "Existed_Raid", 00:11:08.896 "uuid": "960dbe88-7620-44d2-a0dd-50e58822adf9", 00:11:08.896 "strip_size_kb": 0, 00:11:08.896 "state": "configuring", 00:11:08.896 "raid_level": "raid1", 00:11:08.896 "superblock": true, 00:11:08.896 "num_base_bdevs": 4, 00:11:08.896 "num_base_bdevs_discovered": 2, 00:11:08.896 "num_base_bdevs_operational": 4, 00:11:08.896 "base_bdevs_list": [ 00:11:08.896 { 00:11:08.896 "name": "BaseBdev1", 00:11:08.896 "uuid": "95f35266-127d-493a-bd11-5031e2271d15", 00:11:08.896 "is_configured": true, 00:11:08.896 "data_offset": 2048, 00:11:08.896 "data_size": 63488 00:11:08.896 }, 00:11:08.896 { 00:11:08.896 "name": null, 00:11:08.896 "uuid": "624357ab-208e-4897-9c2c-236057bcd1a6", 00:11:08.896 "is_configured": false, 00:11:08.896 "data_offset": 0, 00:11:08.896 "data_size": 63488 00:11:08.896 }, 00:11:08.896 { 00:11:08.896 "name": null, 00:11:08.896 "uuid": "b4ba5755-cdfb-49d0-a1fe-822844fa33b2", 00:11:08.896 "is_configured": false, 00:11:08.896 "data_offset": 0, 00:11:08.896 "data_size": 63488 00:11:08.896 }, 00:11:08.896 { 00:11:08.896 "name": "BaseBdev4", 00:11:08.896 "uuid": "86c40e38-4895-4858-b49f-db8b5f6f70a3", 00:11:08.896 "is_configured": true, 00:11:08.896 "data_offset": 2048, 00:11:08.896 "data_size": 63488 00:11:08.896 } 00:11:08.896 ] 00:11:08.896 }' 00:11:08.896 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.896 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.466 
03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.466 [2024-11-18 03:11:12.863774] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.466 "name": "Existed_Raid", 00:11:09.466 "uuid": "960dbe88-7620-44d2-a0dd-50e58822adf9", 00:11:09.466 "strip_size_kb": 0, 00:11:09.466 "state": "configuring", 00:11:09.466 "raid_level": "raid1", 00:11:09.466 "superblock": true, 00:11:09.466 "num_base_bdevs": 4, 00:11:09.466 "num_base_bdevs_discovered": 3, 00:11:09.466 "num_base_bdevs_operational": 4, 00:11:09.466 "base_bdevs_list": [ 00:11:09.466 { 00:11:09.466 "name": "BaseBdev1", 00:11:09.466 "uuid": "95f35266-127d-493a-bd11-5031e2271d15", 00:11:09.466 "is_configured": true, 00:11:09.466 "data_offset": 2048, 00:11:09.466 "data_size": 63488 00:11:09.466 }, 00:11:09.466 { 00:11:09.466 "name": null, 00:11:09.466 "uuid": "624357ab-208e-4897-9c2c-236057bcd1a6", 00:11:09.466 "is_configured": false, 00:11:09.466 "data_offset": 0, 00:11:09.466 "data_size": 63488 00:11:09.466 }, 00:11:09.466 { 00:11:09.466 "name": "BaseBdev3", 00:11:09.466 "uuid": "b4ba5755-cdfb-49d0-a1fe-822844fa33b2", 00:11:09.466 "is_configured": true, 00:11:09.466 "data_offset": 2048, 00:11:09.466 "data_size": 63488 00:11:09.466 }, 00:11:09.466 { 00:11:09.466 "name": "BaseBdev4", 00:11:09.466 "uuid": 
"86c40e38-4895-4858-b49f-db8b5f6f70a3", 00:11:09.466 "is_configured": true, 00:11:09.466 "data_offset": 2048, 00:11:09.466 "data_size": 63488 00:11:09.466 } 00:11:09.466 ] 00:11:09.466 }' 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.466 03:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.052 [2024-11-18 03:11:13.390913] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.052 "name": "Existed_Raid", 00:11:10.052 "uuid": "960dbe88-7620-44d2-a0dd-50e58822adf9", 00:11:10.052 "strip_size_kb": 0, 00:11:10.052 "state": "configuring", 00:11:10.052 "raid_level": "raid1", 00:11:10.052 "superblock": true, 00:11:10.052 "num_base_bdevs": 4, 00:11:10.052 "num_base_bdevs_discovered": 2, 00:11:10.052 "num_base_bdevs_operational": 4, 00:11:10.052 "base_bdevs_list": [ 00:11:10.052 { 00:11:10.052 "name": null, 00:11:10.052 
"uuid": "95f35266-127d-493a-bd11-5031e2271d15", 00:11:10.052 "is_configured": false, 00:11:10.052 "data_offset": 0, 00:11:10.052 "data_size": 63488 00:11:10.052 }, 00:11:10.052 { 00:11:10.052 "name": null, 00:11:10.052 "uuid": "624357ab-208e-4897-9c2c-236057bcd1a6", 00:11:10.052 "is_configured": false, 00:11:10.052 "data_offset": 0, 00:11:10.052 "data_size": 63488 00:11:10.052 }, 00:11:10.052 { 00:11:10.052 "name": "BaseBdev3", 00:11:10.052 "uuid": "b4ba5755-cdfb-49d0-a1fe-822844fa33b2", 00:11:10.052 "is_configured": true, 00:11:10.052 "data_offset": 2048, 00:11:10.052 "data_size": 63488 00:11:10.052 }, 00:11:10.052 { 00:11:10.052 "name": "BaseBdev4", 00:11:10.052 "uuid": "86c40e38-4895-4858-b49f-db8b5f6f70a3", 00:11:10.052 "is_configured": true, 00:11:10.052 "data_offset": 2048, 00:11:10.052 "data_size": 63488 00:11:10.052 } 00:11:10.052 ] 00:11:10.052 }' 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.052 03:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.621 [2024-11-18 03:11:13.953132] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.621 03:11:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.621 03:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.621 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.621 "name": "Existed_Raid", 00:11:10.621 "uuid": "960dbe88-7620-44d2-a0dd-50e58822adf9", 00:11:10.621 "strip_size_kb": 0, 00:11:10.621 "state": "configuring", 00:11:10.621 "raid_level": "raid1", 00:11:10.621 "superblock": true, 00:11:10.621 "num_base_bdevs": 4, 00:11:10.621 "num_base_bdevs_discovered": 3, 00:11:10.621 "num_base_bdevs_operational": 4, 00:11:10.621 "base_bdevs_list": [ 00:11:10.621 { 00:11:10.621 "name": null, 00:11:10.621 "uuid": "95f35266-127d-493a-bd11-5031e2271d15", 00:11:10.621 "is_configured": false, 00:11:10.621 "data_offset": 0, 00:11:10.621 "data_size": 63488 00:11:10.621 }, 00:11:10.621 { 00:11:10.621 "name": "BaseBdev2", 00:11:10.621 "uuid": "624357ab-208e-4897-9c2c-236057bcd1a6", 00:11:10.621 "is_configured": true, 00:11:10.621 "data_offset": 2048, 00:11:10.621 "data_size": 63488 00:11:10.621 }, 00:11:10.621 { 00:11:10.621 "name": "BaseBdev3", 00:11:10.621 "uuid": "b4ba5755-cdfb-49d0-a1fe-822844fa33b2", 00:11:10.621 "is_configured": true, 00:11:10.621 "data_offset": 2048, 00:11:10.621 "data_size": 63488 00:11:10.621 }, 00:11:10.621 { 00:11:10.621 "name": "BaseBdev4", 00:11:10.621 "uuid": "86c40e38-4895-4858-b49f-db8b5f6f70a3", 00:11:10.621 "is_configured": true, 00:11:10.621 "data_offset": 2048, 00:11:10.621 "data_size": 63488 00:11:10.621 } 00:11:10.621 ] 00:11:10.621 }' 00:11:10.621 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.621 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.880 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:10.880 03:11:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.880 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.880 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.880 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.880 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:10.880 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.880 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.880 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.880 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 95f35266-127d-493a-bd11-5031e2271d15 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.140 [2024-11-18 03:11:14.499596] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:11.140 [2024-11-18 03:11:14.499890] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:11.140 [2024-11-18 03:11:14.499951] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:11.140 NewBaseBdev 00:11:11.140 [2024-11-18 03:11:14.500286] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:11.140 [2024-11-18 03:11:14.500441] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:11.140 [2024-11-18 03:11:14.500500] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:11:11.140 [2024-11-18 03:11:14.500667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.140 03:11:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.140 [ 00:11:11.140 { 00:11:11.140 "name": "NewBaseBdev", 00:11:11.140 "aliases": [ 00:11:11.140 "95f35266-127d-493a-bd11-5031e2271d15" 00:11:11.140 ], 00:11:11.140 "product_name": "Malloc disk", 00:11:11.140 "block_size": 512, 00:11:11.140 "num_blocks": 65536, 00:11:11.140 "uuid": "95f35266-127d-493a-bd11-5031e2271d15", 00:11:11.140 "assigned_rate_limits": { 00:11:11.140 "rw_ios_per_sec": 0, 00:11:11.140 "rw_mbytes_per_sec": 0, 00:11:11.140 "r_mbytes_per_sec": 0, 00:11:11.140 "w_mbytes_per_sec": 0 00:11:11.140 }, 00:11:11.140 "claimed": true, 00:11:11.140 "claim_type": "exclusive_write", 00:11:11.140 "zoned": false, 00:11:11.140 "supported_io_types": { 00:11:11.140 "read": true, 00:11:11.140 "write": true, 00:11:11.140 "unmap": true, 00:11:11.140 "flush": true, 00:11:11.140 "reset": true, 00:11:11.140 "nvme_admin": false, 00:11:11.140 "nvme_io": false, 00:11:11.140 "nvme_io_md": false, 00:11:11.140 "write_zeroes": true, 00:11:11.140 "zcopy": true, 00:11:11.140 "get_zone_info": false, 00:11:11.140 "zone_management": false, 00:11:11.140 "zone_append": false, 00:11:11.140 "compare": false, 00:11:11.140 "compare_and_write": false, 00:11:11.140 "abort": true, 00:11:11.140 "seek_hole": false, 00:11:11.140 "seek_data": false, 00:11:11.140 "copy": true, 00:11:11.140 "nvme_iov_md": false 00:11:11.140 }, 00:11:11.140 "memory_domains": [ 00:11:11.140 { 00:11:11.140 "dma_device_id": "system", 00:11:11.140 "dma_device_type": 1 00:11:11.140 }, 00:11:11.140 { 00:11:11.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.140 "dma_device_type": 2 00:11:11.140 } 00:11:11.140 ], 00:11:11.140 "driver_specific": {} 00:11:11.140 } 00:11:11.140 ] 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:11.140 03:11:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.140 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.141 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.141 "name": "Existed_Raid", 00:11:11.141 "uuid": "960dbe88-7620-44d2-a0dd-50e58822adf9", 00:11:11.141 "strip_size_kb": 0, 00:11:11.141 
"state": "online", 00:11:11.141 "raid_level": "raid1", 00:11:11.141 "superblock": true, 00:11:11.141 "num_base_bdevs": 4, 00:11:11.141 "num_base_bdevs_discovered": 4, 00:11:11.141 "num_base_bdevs_operational": 4, 00:11:11.141 "base_bdevs_list": [ 00:11:11.141 { 00:11:11.141 "name": "NewBaseBdev", 00:11:11.141 "uuid": "95f35266-127d-493a-bd11-5031e2271d15", 00:11:11.141 "is_configured": true, 00:11:11.141 "data_offset": 2048, 00:11:11.141 "data_size": 63488 00:11:11.141 }, 00:11:11.141 { 00:11:11.141 "name": "BaseBdev2", 00:11:11.141 "uuid": "624357ab-208e-4897-9c2c-236057bcd1a6", 00:11:11.141 "is_configured": true, 00:11:11.141 "data_offset": 2048, 00:11:11.141 "data_size": 63488 00:11:11.141 }, 00:11:11.141 { 00:11:11.141 "name": "BaseBdev3", 00:11:11.141 "uuid": "b4ba5755-cdfb-49d0-a1fe-822844fa33b2", 00:11:11.141 "is_configured": true, 00:11:11.141 "data_offset": 2048, 00:11:11.141 "data_size": 63488 00:11:11.141 }, 00:11:11.141 { 00:11:11.141 "name": "BaseBdev4", 00:11:11.141 "uuid": "86c40e38-4895-4858-b49f-db8b5f6f70a3", 00:11:11.141 "is_configured": true, 00:11:11.141 "data_offset": 2048, 00:11:11.141 "data_size": 63488 00:11:11.141 } 00:11:11.141 ] 00:11:11.141 }' 00:11:11.141 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.141 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.710 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:11.710 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:11.710 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:11.710 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:11.710 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:11.710 
03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:11.710 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:11.710 03:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:11.710 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.710 03:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.710 [2024-11-18 03:11:14.999483] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.710 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.710 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:11.710 "name": "Existed_Raid", 00:11:11.710 "aliases": [ 00:11:11.710 "960dbe88-7620-44d2-a0dd-50e58822adf9" 00:11:11.710 ], 00:11:11.710 "product_name": "Raid Volume", 00:11:11.710 "block_size": 512, 00:11:11.710 "num_blocks": 63488, 00:11:11.710 "uuid": "960dbe88-7620-44d2-a0dd-50e58822adf9", 00:11:11.710 "assigned_rate_limits": { 00:11:11.710 "rw_ios_per_sec": 0, 00:11:11.710 "rw_mbytes_per_sec": 0, 00:11:11.710 "r_mbytes_per_sec": 0, 00:11:11.710 "w_mbytes_per_sec": 0 00:11:11.710 }, 00:11:11.710 "claimed": false, 00:11:11.710 "zoned": false, 00:11:11.710 "supported_io_types": { 00:11:11.710 "read": true, 00:11:11.710 "write": true, 00:11:11.710 "unmap": false, 00:11:11.710 "flush": false, 00:11:11.710 "reset": true, 00:11:11.710 "nvme_admin": false, 00:11:11.710 "nvme_io": false, 00:11:11.710 "nvme_io_md": false, 00:11:11.710 "write_zeroes": true, 00:11:11.710 "zcopy": false, 00:11:11.710 "get_zone_info": false, 00:11:11.710 "zone_management": false, 00:11:11.710 "zone_append": false, 00:11:11.710 "compare": false, 00:11:11.710 "compare_and_write": false, 00:11:11.710 
"abort": false, 00:11:11.710 "seek_hole": false, 00:11:11.710 "seek_data": false, 00:11:11.710 "copy": false, 00:11:11.710 "nvme_iov_md": false 00:11:11.710 }, 00:11:11.710 "memory_domains": [ 00:11:11.710 { 00:11:11.710 "dma_device_id": "system", 00:11:11.710 "dma_device_type": 1 00:11:11.710 }, 00:11:11.710 { 00:11:11.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.710 "dma_device_type": 2 00:11:11.710 }, 00:11:11.710 { 00:11:11.710 "dma_device_id": "system", 00:11:11.710 "dma_device_type": 1 00:11:11.710 }, 00:11:11.710 { 00:11:11.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.710 "dma_device_type": 2 00:11:11.710 }, 00:11:11.710 { 00:11:11.710 "dma_device_id": "system", 00:11:11.710 "dma_device_type": 1 00:11:11.710 }, 00:11:11.710 { 00:11:11.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.710 "dma_device_type": 2 00:11:11.710 }, 00:11:11.710 { 00:11:11.710 "dma_device_id": "system", 00:11:11.710 "dma_device_type": 1 00:11:11.710 }, 00:11:11.710 { 00:11:11.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.710 "dma_device_type": 2 00:11:11.710 } 00:11:11.710 ], 00:11:11.710 "driver_specific": { 00:11:11.710 "raid": { 00:11:11.710 "uuid": "960dbe88-7620-44d2-a0dd-50e58822adf9", 00:11:11.710 "strip_size_kb": 0, 00:11:11.710 "state": "online", 00:11:11.710 "raid_level": "raid1", 00:11:11.710 "superblock": true, 00:11:11.710 "num_base_bdevs": 4, 00:11:11.710 "num_base_bdevs_discovered": 4, 00:11:11.710 "num_base_bdevs_operational": 4, 00:11:11.710 "base_bdevs_list": [ 00:11:11.710 { 00:11:11.710 "name": "NewBaseBdev", 00:11:11.710 "uuid": "95f35266-127d-493a-bd11-5031e2271d15", 00:11:11.710 "is_configured": true, 00:11:11.710 "data_offset": 2048, 00:11:11.710 "data_size": 63488 00:11:11.710 }, 00:11:11.710 { 00:11:11.710 "name": "BaseBdev2", 00:11:11.710 "uuid": "624357ab-208e-4897-9c2c-236057bcd1a6", 00:11:11.710 "is_configured": true, 00:11:11.710 "data_offset": 2048, 00:11:11.710 "data_size": 63488 00:11:11.710 }, 00:11:11.710 { 
00:11:11.710 "name": "BaseBdev3", 00:11:11.710 "uuid": "b4ba5755-cdfb-49d0-a1fe-822844fa33b2", 00:11:11.710 "is_configured": true, 00:11:11.710 "data_offset": 2048, 00:11:11.710 "data_size": 63488 00:11:11.710 }, 00:11:11.710 { 00:11:11.710 "name": "BaseBdev4", 00:11:11.710 "uuid": "86c40e38-4895-4858-b49f-db8b5f6f70a3", 00:11:11.710 "is_configured": true, 00:11:11.710 "data_offset": 2048, 00:11:11.710 "data_size": 63488 00:11:11.710 } 00:11:11.710 ] 00:11:11.710 } 00:11:11.710 } 00:11:11.710 }' 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:11.711 BaseBdev2 00:11:11.711 BaseBdev3 00:11:11.711 BaseBdev4' 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.711 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.971 [2024-11-18 03:11:15.287091] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.971 [2024-11-18 03:11:15.287123] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.971 [2024-11-18 03:11:15.287217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.971 [2024-11-18 03:11:15.287517] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.971 [2024-11-18 03:11:15.287537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006d00 name Existed_Raid, state offline 00:11:11.971 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.971 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84785 00:11:11.971 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84785 ']' 00:11:11.971 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84785 00:11:11.971 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:11.971 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:11.971 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84785 00:11:11.971 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:11.971 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:11.971 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84785' 00:11:11.971 killing process with pid 84785 00:11:11.971 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84785 00:11:11.971 [2024-11-18 03:11:15.332782] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.971 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84785 00:11:11.971 [2024-11-18 03:11:15.374779] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.231 03:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:12.231 00:11:12.231 real 0m9.664s 00:11:12.231 user 0m16.592s 00:11:12.231 sys 0m1.973s 00:11:12.231 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:12.231 03:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.231 ************************************ 00:11:12.231 END TEST raid_state_function_test_sb 00:11:12.231 ************************************ 00:11:12.231 03:11:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:12.231 03:11:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:12.231 03:11:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.231 03:11:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.231 ************************************ 00:11:12.231 START TEST raid_superblock_test 00:11:12.231 ************************************ 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:12.231 03:11:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85433 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85433 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 85433 ']' 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:12.231 03:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.231 [2024-11-18 03:11:15.791309] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:12.231 [2024-11-18 03:11:15.791453] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85433 ] 00:11:12.491 [2024-11-18 03:11:15.954139] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.491 [2024-11-18 03:11:16.008113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.491 [2024-11-18 03:11:16.053610] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.491 [2024-11-18 03:11:16.053645] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.432 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:13.433 
03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.433 malloc1 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.433 [2024-11-18 03:11:16.702259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:13.433 [2024-11-18 03:11:16.702398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.433 [2024-11-18 03:11:16.702468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:13.433 [2024-11-18 03:11:16.702519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.433 [2024-11-18 03:11:16.705139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.433 [2024-11-18 03:11:16.705229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:13.433 pt1 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.433 malloc2 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.433 [2024-11-18 03:11:16.743763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:13.433 [2024-11-18 03:11:16.743895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.433 [2024-11-18 03:11:16.743941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:13.433 [2024-11-18 03:11:16.744002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.433 [2024-11-18 03:11:16.746767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.433 [2024-11-18 03:11:16.746857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:13.433 
pt2 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.433 malloc3 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.433 [2024-11-18 03:11:16.773246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:13.433 [2024-11-18 03:11:16.773369] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.433 [2024-11-18 03:11:16.773422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:13.433 [2024-11-18 03:11:16.773471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.433 [2024-11-18 03:11:16.776010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.433 [2024-11-18 03:11:16.776108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:13.433 pt3 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.433 malloc4 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.433 [2024-11-18 03:11:16.806544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:13.433 [2024-11-18 03:11:16.806661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.433 [2024-11-18 03:11:16.806702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:13.433 [2024-11-18 03:11:16.806741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.433 [2024-11-18 03:11:16.809172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.433 [2024-11-18 03:11:16.809260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:13.433 pt4 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.433 [2024-11-18 03:11:16.818598] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:13.433 [2024-11-18 03:11:16.820761] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:13.433 [2024-11-18 03:11:16.820818] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:13.433 [2024-11-18 03:11:16.820857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:13.433 [2024-11-18 03:11:16.821020] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:13.433 [2024-11-18 03:11:16.821035] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:13.433 [2024-11-18 03:11:16.821314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:13.433 [2024-11-18 03:11:16.821450] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:13.433 [2024-11-18 03:11:16.821460] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:13.433 [2024-11-18 03:11:16.821584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.433 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:13.434 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.434 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.434 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.434 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.434 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.434 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.434 
03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.434 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.434 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.434 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.434 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.434 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.434 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.434 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.434 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.434 "name": "raid_bdev1", 00:11:13.434 "uuid": "689847a8-9ade-4bf5-896e-7fc6d1fd4771", 00:11:13.434 "strip_size_kb": 0, 00:11:13.434 "state": "online", 00:11:13.434 "raid_level": "raid1", 00:11:13.434 "superblock": true, 00:11:13.434 "num_base_bdevs": 4, 00:11:13.434 "num_base_bdevs_discovered": 4, 00:11:13.434 "num_base_bdevs_operational": 4, 00:11:13.434 "base_bdevs_list": [ 00:11:13.434 { 00:11:13.434 "name": "pt1", 00:11:13.434 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:13.434 "is_configured": true, 00:11:13.434 "data_offset": 2048, 00:11:13.434 "data_size": 63488 00:11:13.434 }, 00:11:13.434 { 00:11:13.434 "name": "pt2", 00:11:13.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:13.434 "is_configured": true, 00:11:13.434 "data_offset": 2048, 00:11:13.434 "data_size": 63488 00:11:13.434 }, 00:11:13.434 { 00:11:13.434 "name": "pt3", 00:11:13.434 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:13.434 "is_configured": true, 00:11:13.434 "data_offset": 2048, 00:11:13.434 "data_size": 63488 
00:11:13.434 }, 00:11:13.434 { 00:11:13.434 "name": "pt4", 00:11:13.434 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:13.434 "is_configured": true, 00:11:13.434 "data_offset": 2048, 00:11:13.434 "data_size": 63488 00:11:13.434 } 00:11:13.434 ] 00:11:13.434 }' 00:11:13.434 03:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.434 03:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.003 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:14.003 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:14.003 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:14.003 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:14.003 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:14.003 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:14.003 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:14.003 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:14.003 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.003 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.003 [2024-11-18 03:11:17.294116] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.003 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.003 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:14.003 "name": "raid_bdev1", 00:11:14.003 "aliases": [ 00:11:14.003 "689847a8-9ade-4bf5-896e-7fc6d1fd4771" 00:11:14.003 ], 
00:11:14.003 "product_name": "Raid Volume", 00:11:14.003 "block_size": 512, 00:11:14.003 "num_blocks": 63488, 00:11:14.003 "uuid": "689847a8-9ade-4bf5-896e-7fc6d1fd4771", 00:11:14.003 "assigned_rate_limits": { 00:11:14.003 "rw_ios_per_sec": 0, 00:11:14.003 "rw_mbytes_per_sec": 0, 00:11:14.003 "r_mbytes_per_sec": 0, 00:11:14.003 "w_mbytes_per_sec": 0 00:11:14.003 }, 00:11:14.003 "claimed": false, 00:11:14.003 "zoned": false, 00:11:14.003 "supported_io_types": { 00:11:14.003 "read": true, 00:11:14.003 "write": true, 00:11:14.003 "unmap": false, 00:11:14.003 "flush": false, 00:11:14.003 "reset": true, 00:11:14.003 "nvme_admin": false, 00:11:14.003 "nvme_io": false, 00:11:14.003 "nvme_io_md": false, 00:11:14.003 "write_zeroes": true, 00:11:14.003 "zcopy": false, 00:11:14.003 "get_zone_info": false, 00:11:14.003 "zone_management": false, 00:11:14.003 "zone_append": false, 00:11:14.003 "compare": false, 00:11:14.003 "compare_and_write": false, 00:11:14.003 "abort": false, 00:11:14.003 "seek_hole": false, 00:11:14.003 "seek_data": false, 00:11:14.003 "copy": false, 00:11:14.003 "nvme_iov_md": false 00:11:14.003 }, 00:11:14.003 "memory_domains": [ 00:11:14.003 { 00:11:14.003 "dma_device_id": "system", 00:11:14.003 "dma_device_type": 1 00:11:14.003 }, 00:11:14.003 { 00:11:14.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.003 "dma_device_type": 2 00:11:14.003 }, 00:11:14.003 { 00:11:14.003 "dma_device_id": "system", 00:11:14.003 "dma_device_type": 1 00:11:14.003 }, 00:11:14.003 { 00:11:14.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.003 "dma_device_type": 2 00:11:14.003 }, 00:11:14.003 { 00:11:14.003 "dma_device_id": "system", 00:11:14.003 "dma_device_type": 1 00:11:14.003 }, 00:11:14.003 { 00:11:14.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.003 "dma_device_type": 2 00:11:14.003 }, 00:11:14.003 { 00:11:14.003 "dma_device_id": "system", 00:11:14.003 "dma_device_type": 1 00:11:14.003 }, 00:11:14.003 { 00:11:14.003 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:14.003 "dma_device_type": 2 00:11:14.003 } 00:11:14.003 ], 00:11:14.003 "driver_specific": { 00:11:14.003 "raid": { 00:11:14.004 "uuid": "689847a8-9ade-4bf5-896e-7fc6d1fd4771", 00:11:14.004 "strip_size_kb": 0, 00:11:14.004 "state": "online", 00:11:14.004 "raid_level": "raid1", 00:11:14.004 "superblock": true, 00:11:14.004 "num_base_bdevs": 4, 00:11:14.004 "num_base_bdevs_discovered": 4, 00:11:14.004 "num_base_bdevs_operational": 4, 00:11:14.004 "base_bdevs_list": [ 00:11:14.004 { 00:11:14.004 "name": "pt1", 00:11:14.004 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:14.004 "is_configured": true, 00:11:14.004 "data_offset": 2048, 00:11:14.004 "data_size": 63488 00:11:14.004 }, 00:11:14.004 { 00:11:14.004 "name": "pt2", 00:11:14.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:14.004 "is_configured": true, 00:11:14.004 "data_offset": 2048, 00:11:14.004 "data_size": 63488 00:11:14.004 }, 00:11:14.004 { 00:11:14.004 "name": "pt3", 00:11:14.004 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:14.004 "is_configured": true, 00:11:14.004 "data_offset": 2048, 00:11:14.004 "data_size": 63488 00:11:14.004 }, 00:11:14.004 { 00:11:14.004 "name": "pt4", 00:11:14.004 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:14.004 "is_configured": true, 00:11:14.004 "data_offset": 2048, 00:11:14.004 "data_size": 63488 00:11:14.004 } 00:11:14.004 ] 00:11:14.004 } 00:11:14.004 } 00:11:14.004 }' 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:14.004 pt2 00:11:14.004 pt3 00:11:14.004 pt4' 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.004 03:11:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.004 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.264 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.264 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.264 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.264 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:14.264 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.264 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.264 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.264 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.264 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.264 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.264 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:14.264 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.265 [2024-11-18 03:11:17.629458] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=689847a8-9ade-4bf5-896e-7fc6d1fd4771 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 689847a8-9ade-4bf5-896e-7fc6d1fd4771 ']' 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.265 [2024-11-18 03:11:17.677082] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.265 [2024-11-18 03:11:17.677160] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.265 [2024-11-18 03:11:17.677268] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.265 [2024-11-18 03:11:17.677389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.265 [2024-11-18 03:11:17.677438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:14.265 03:11:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.265 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.526 [2024-11-18 03:11:17.844869] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:14.526 [2024-11-18 03:11:17.846937] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:14.526 [2024-11-18 03:11:17.847019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:14.526 [2024-11-18 03:11:17.847054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:14.526 [2024-11-18 03:11:17.847106] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:14.526 [2024-11-18 03:11:17.847155] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:14.526 [2024-11-18 03:11:17.847177] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:14.526 [2024-11-18 03:11:17.847195] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:14.526 [2024-11-18 03:11:17.847211] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.526 [2024-11-18 03:11:17.847221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name 
raid_bdev1, state configuring 00:11:14.526 request: 00:11:14.526 { 00:11:14.526 "name": "raid_bdev1", 00:11:14.526 "raid_level": "raid1", 00:11:14.526 "base_bdevs": [ 00:11:14.526 "malloc1", 00:11:14.526 "malloc2", 00:11:14.526 "malloc3", 00:11:14.526 "malloc4" 00:11:14.526 ], 00:11:14.526 "superblock": false, 00:11:14.526 "method": "bdev_raid_create", 00:11:14.526 "req_id": 1 00:11:14.526 } 00:11:14.526 Got JSON-RPC error response 00:11:14.526 response: 00:11:14.526 { 00:11:14.526 "code": -17, 00:11:14.526 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:14.526 } 00:11:14.526 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:14.526 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:14.526 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:14.526 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:14.526 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:14.526 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:14.526 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.526 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.526 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.526 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:14.527 
03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.527 [2024-11-18 03:11:17.908725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:14.527 [2024-11-18 03:11:17.908836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.527 [2024-11-18 03:11:17.908893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:14.527 [2024-11-18 03:11:17.908936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.527 [2024-11-18 03:11:17.911469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.527 [2024-11-18 03:11:17.911560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:14.527 [2024-11-18 03:11:17.911661] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:14.527 [2024-11-18 03:11:17.911701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:14.527 pt1 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.527 03:11:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.527 "name": "raid_bdev1", 00:11:14.527 "uuid": "689847a8-9ade-4bf5-896e-7fc6d1fd4771", 00:11:14.527 "strip_size_kb": 0, 00:11:14.527 "state": "configuring", 00:11:14.527 "raid_level": "raid1", 00:11:14.527 "superblock": true, 00:11:14.527 "num_base_bdevs": 4, 00:11:14.527 "num_base_bdevs_discovered": 1, 00:11:14.527 "num_base_bdevs_operational": 4, 00:11:14.527 "base_bdevs_list": [ 00:11:14.527 { 00:11:14.527 "name": "pt1", 00:11:14.527 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:14.527 "is_configured": true, 00:11:14.527 "data_offset": 2048, 00:11:14.527 "data_size": 63488 00:11:14.527 }, 00:11:14.527 { 00:11:14.527 "name": null, 00:11:14.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:14.527 "is_configured": false, 00:11:14.527 "data_offset": 2048, 00:11:14.527 "data_size": 63488 00:11:14.527 }, 00:11:14.527 { 00:11:14.527 "name": null, 00:11:14.527 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:14.527 
"is_configured": false, 00:11:14.527 "data_offset": 2048, 00:11:14.527 "data_size": 63488 00:11:14.527 }, 00:11:14.527 { 00:11:14.527 "name": null, 00:11:14.527 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:14.527 "is_configured": false, 00:11:14.527 "data_offset": 2048, 00:11:14.527 "data_size": 63488 00:11:14.527 } 00:11:14.527 ] 00:11:14.527 }' 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.527 03:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.097 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:15.097 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:15.097 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.097 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.097 [2024-11-18 03:11:18.420028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:15.097 [2024-11-18 03:11:18.420156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.097 [2024-11-18 03:11:18.420215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:15.097 [2024-11-18 03:11:18.420252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.097 [2024-11-18 03:11:18.420732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.098 [2024-11-18 03:11:18.420796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:15.098 [2024-11-18 03:11:18.420924] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:15.098 [2024-11-18 03:11:18.421011] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:15.098 pt2 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.098 [2024-11-18 03:11:18.428007] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.098 "name": "raid_bdev1", 00:11:15.098 "uuid": "689847a8-9ade-4bf5-896e-7fc6d1fd4771", 00:11:15.098 "strip_size_kb": 0, 00:11:15.098 "state": "configuring", 00:11:15.098 "raid_level": "raid1", 00:11:15.098 "superblock": true, 00:11:15.098 "num_base_bdevs": 4, 00:11:15.098 "num_base_bdevs_discovered": 1, 00:11:15.098 "num_base_bdevs_operational": 4, 00:11:15.098 "base_bdevs_list": [ 00:11:15.098 { 00:11:15.098 "name": "pt1", 00:11:15.098 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:15.098 "is_configured": true, 00:11:15.098 "data_offset": 2048, 00:11:15.098 "data_size": 63488 00:11:15.098 }, 00:11:15.098 { 00:11:15.098 "name": null, 00:11:15.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.098 "is_configured": false, 00:11:15.098 "data_offset": 0, 00:11:15.098 "data_size": 63488 00:11:15.098 }, 00:11:15.098 { 00:11:15.098 "name": null, 00:11:15.098 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:15.098 "is_configured": false, 00:11:15.098 "data_offset": 2048, 00:11:15.098 "data_size": 63488 00:11:15.098 }, 00:11:15.098 { 00:11:15.098 "name": null, 00:11:15.098 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:15.098 "is_configured": false, 00:11:15.098 "data_offset": 2048, 00:11:15.098 "data_size": 63488 00:11:15.098 } 00:11:15.098 ] 00:11:15.098 }' 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.098 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.358 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:15.358 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:15.358 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:15.358 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.358 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.358 [2024-11-18 03:11:18.915169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:15.358 [2024-11-18 03:11:18.915314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.358 [2024-11-18 03:11:18.915340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:15.358 [2024-11-18 03:11:18.915354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.358 [2024-11-18 03:11:18.915797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.358 [2024-11-18 03:11:18.915820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:15.358 [2024-11-18 03:11:18.915902] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:15.358 [2024-11-18 03:11:18.915928] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:15.358 pt2 00:11:15.358 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.358 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:15.358 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:15.358 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:15.358 03:11:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.358 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.358 [2024-11-18 03:11:18.927090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:15.358 [2024-11-18 03:11:18.927156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.358 [2024-11-18 03:11:18.927177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:15.358 [2024-11-18 03:11:18.927189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.358 [2024-11-18 03:11:18.927590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.358 [2024-11-18 03:11:18.927618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:15.358 [2024-11-18 03:11:18.927688] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:15.358 [2024-11-18 03:11:18.927711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:15.619 pt3 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.619 [2024-11-18 03:11:18.939092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:15.619 [2024-11-18 
03:11:18.939148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.619 [2024-11-18 03:11:18.939166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:15.619 [2024-11-18 03:11:18.939177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.619 [2024-11-18 03:11:18.939546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.619 [2024-11-18 03:11:18.939568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:15.619 [2024-11-18 03:11:18.939631] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:15.619 [2024-11-18 03:11:18.939661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:15.619 [2024-11-18 03:11:18.939783] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:15.619 [2024-11-18 03:11:18.939797] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:15.619 [2024-11-18 03:11:18.940092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:15.619 [2024-11-18 03:11:18.940232] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:15.619 [2024-11-18 03:11:18.940243] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:11:15.619 [2024-11-18 03:11:18.940357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.619 pt4 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.619 "name": "raid_bdev1", 00:11:15.619 "uuid": "689847a8-9ade-4bf5-896e-7fc6d1fd4771", 00:11:15.619 "strip_size_kb": 0, 00:11:15.619 "state": "online", 00:11:15.619 "raid_level": "raid1", 00:11:15.619 "superblock": true, 00:11:15.619 "num_base_bdevs": 4, 00:11:15.619 
"num_base_bdevs_discovered": 4, 00:11:15.619 "num_base_bdevs_operational": 4, 00:11:15.619 "base_bdevs_list": [ 00:11:15.619 { 00:11:15.619 "name": "pt1", 00:11:15.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:15.619 "is_configured": true, 00:11:15.619 "data_offset": 2048, 00:11:15.619 "data_size": 63488 00:11:15.619 }, 00:11:15.619 { 00:11:15.619 "name": "pt2", 00:11:15.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.619 "is_configured": true, 00:11:15.619 "data_offset": 2048, 00:11:15.619 "data_size": 63488 00:11:15.619 }, 00:11:15.619 { 00:11:15.619 "name": "pt3", 00:11:15.619 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:15.619 "is_configured": true, 00:11:15.619 "data_offset": 2048, 00:11:15.619 "data_size": 63488 00:11:15.619 }, 00:11:15.619 { 00:11:15.619 "name": "pt4", 00:11:15.619 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:15.619 "is_configured": true, 00:11:15.619 "data_offset": 2048, 00:11:15.619 "data_size": 63488 00:11:15.619 } 00:11:15.619 ] 00:11:15.619 }' 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.619 03:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.878 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:15.878 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:15.878 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:15.878 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:15.878 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:15.878 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:15.878 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:15.878 03:11:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:15.878 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.878 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.137 [2024-11-18 03:11:19.455114] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.137 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.137 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:16.137 "name": "raid_bdev1", 00:11:16.137 "aliases": [ 00:11:16.137 "689847a8-9ade-4bf5-896e-7fc6d1fd4771" 00:11:16.137 ], 00:11:16.137 "product_name": "Raid Volume", 00:11:16.137 "block_size": 512, 00:11:16.137 "num_blocks": 63488, 00:11:16.137 "uuid": "689847a8-9ade-4bf5-896e-7fc6d1fd4771", 00:11:16.137 "assigned_rate_limits": { 00:11:16.137 "rw_ios_per_sec": 0, 00:11:16.137 "rw_mbytes_per_sec": 0, 00:11:16.137 "r_mbytes_per_sec": 0, 00:11:16.137 "w_mbytes_per_sec": 0 00:11:16.137 }, 00:11:16.137 "claimed": false, 00:11:16.137 "zoned": false, 00:11:16.137 "supported_io_types": { 00:11:16.137 "read": true, 00:11:16.137 "write": true, 00:11:16.137 "unmap": false, 00:11:16.137 "flush": false, 00:11:16.137 "reset": true, 00:11:16.137 "nvme_admin": false, 00:11:16.137 "nvme_io": false, 00:11:16.137 "nvme_io_md": false, 00:11:16.137 "write_zeroes": true, 00:11:16.137 "zcopy": false, 00:11:16.137 "get_zone_info": false, 00:11:16.137 "zone_management": false, 00:11:16.137 "zone_append": false, 00:11:16.137 "compare": false, 00:11:16.137 "compare_and_write": false, 00:11:16.137 "abort": false, 00:11:16.137 "seek_hole": false, 00:11:16.137 "seek_data": false, 00:11:16.137 "copy": false, 00:11:16.137 "nvme_iov_md": false 00:11:16.137 }, 00:11:16.137 "memory_domains": [ 00:11:16.137 { 00:11:16.137 "dma_device_id": "system", 00:11:16.137 
"dma_device_type": 1 00:11:16.138 }, 00:11:16.138 { 00:11:16.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.138 "dma_device_type": 2 00:11:16.138 }, 00:11:16.138 { 00:11:16.138 "dma_device_id": "system", 00:11:16.138 "dma_device_type": 1 00:11:16.138 }, 00:11:16.138 { 00:11:16.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.138 "dma_device_type": 2 00:11:16.138 }, 00:11:16.138 { 00:11:16.138 "dma_device_id": "system", 00:11:16.138 "dma_device_type": 1 00:11:16.138 }, 00:11:16.138 { 00:11:16.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.138 "dma_device_type": 2 00:11:16.138 }, 00:11:16.138 { 00:11:16.138 "dma_device_id": "system", 00:11:16.138 "dma_device_type": 1 00:11:16.138 }, 00:11:16.138 { 00:11:16.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.138 "dma_device_type": 2 00:11:16.138 } 00:11:16.138 ], 00:11:16.138 "driver_specific": { 00:11:16.138 "raid": { 00:11:16.138 "uuid": "689847a8-9ade-4bf5-896e-7fc6d1fd4771", 00:11:16.138 "strip_size_kb": 0, 00:11:16.138 "state": "online", 00:11:16.138 "raid_level": "raid1", 00:11:16.138 "superblock": true, 00:11:16.138 "num_base_bdevs": 4, 00:11:16.138 "num_base_bdevs_discovered": 4, 00:11:16.138 "num_base_bdevs_operational": 4, 00:11:16.138 "base_bdevs_list": [ 00:11:16.138 { 00:11:16.138 "name": "pt1", 00:11:16.138 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:16.138 "is_configured": true, 00:11:16.138 "data_offset": 2048, 00:11:16.138 "data_size": 63488 00:11:16.138 }, 00:11:16.138 { 00:11:16.138 "name": "pt2", 00:11:16.138 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.138 "is_configured": true, 00:11:16.138 "data_offset": 2048, 00:11:16.138 "data_size": 63488 00:11:16.138 }, 00:11:16.138 { 00:11:16.138 "name": "pt3", 00:11:16.138 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.138 "is_configured": true, 00:11:16.138 "data_offset": 2048, 00:11:16.138 "data_size": 63488 00:11:16.138 }, 00:11:16.138 { 00:11:16.138 "name": "pt4", 00:11:16.138 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:16.138 "is_configured": true, 00:11:16.138 "data_offset": 2048, 00:11:16.138 "data_size": 63488 00:11:16.138 } 00:11:16.138 ] 00:11:16.138 } 00:11:16.138 } 00:11:16.138 }' 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:16.138 pt2 00:11:16.138 pt3 00:11:16.138 pt4' 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.138 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.398 [2024-11-18 03:11:19.802462] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 689847a8-9ade-4bf5-896e-7fc6d1fd4771 '!=' 689847a8-9ade-4bf5-896e-7fc6d1fd4771 ']' 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.398 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.399 [2024-11-18 03:11:19.846172] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:16.399 03:11:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.399 "name": "raid_bdev1", 00:11:16.399 "uuid": "689847a8-9ade-4bf5-896e-7fc6d1fd4771", 00:11:16.399 "strip_size_kb": 0, 00:11:16.399 "state": "online", 
00:11:16.399 "raid_level": "raid1", 00:11:16.399 "superblock": true, 00:11:16.399 "num_base_bdevs": 4, 00:11:16.399 "num_base_bdevs_discovered": 3, 00:11:16.399 "num_base_bdevs_operational": 3, 00:11:16.399 "base_bdevs_list": [ 00:11:16.399 { 00:11:16.399 "name": null, 00:11:16.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.399 "is_configured": false, 00:11:16.399 "data_offset": 0, 00:11:16.399 "data_size": 63488 00:11:16.399 }, 00:11:16.399 { 00:11:16.399 "name": "pt2", 00:11:16.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.399 "is_configured": true, 00:11:16.399 "data_offset": 2048, 00:11:16.399 "data_size": 63488 00:11:16.399 }, 00:11:16.399 { 00:11:16.399 "name": "pt3", 00:11:16.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.399 "is_configured": true, 00:11:16.399 "data_offset": 2048, 00:11:16.399 "data_size": 63488 00:11:16.399 }, 00:11:16.399 { 00:11:16.399 "name": "pt4", 00:11:16.399 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:16.399 "is_configured": true, 00:11:16.399 "data_offset": 2048, 00:11:16.399 "data_size": 63488 00:11:16.399 } 00:11:16.399 ] 00:11:16.399 }' 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.399 03:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.969 [2024-11-18 03:11:20.301371] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:16.969 [2024-11-18 03:11:20.301404] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.969 [2024-11-18 03:11:20.301500] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:16.969 [2024-11-18 03:11:20.301581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.969 [2024-11-18 03:11:20.301595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:16.969 
03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.969 [2024-11-18 03:11:20.401184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:16.969 [2024-11-18 03:11:20.401254] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.969 [2024-11-18 03:11:20.401276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:16.969 [2024-11-18 03:11:20.401288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.969 [2024-11-18 03:11:20.403780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.969 [2024-11-18 03:11:20.403884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:16.969 [2024-11-18 03:11:20.403988] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:16.969 [2024-11-18 03:11:20.404029] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:16.969 pt2 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.969 "name": "raid_bdev1", 00:11:16.969 "uuid": "689847a8-9ade-4bf5-896e-7fc6d1fd4771", 00:11:16.969 "strip_size_kb": 0, 00:11:16.969 "state": "configuring", 00:11:16.969 "raid_level": "raid1", 00:11:16.969 "superblock": true, 00:11:16.969 "num_base_bdevs": 4, 00:11:16.969 "num_base_bdevs_discovered": 1, 00:11:16.969 "num_base_bdevs_operational": 3, 00:11:16.969 "base_bdevs_list": [ 00:11:16.969 { 00:11:16.969 "name": null, 00:11:16.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.969 "is_configured": false, 00:11:16.969 "data_offset": 2048, 00:11:16.969 "data_size": 63488 00:11:16.969 }, 00:11:16.969 { 00:11:16.969 "name": "pt2", 00:11:16.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.969 "is_configured": true, 00:11:16.969 "data_offset": 2048, 00:11:16.969 "data_size": 63488 00:11:16.969 }, 00:11:16.969 { 00:11:16.969 "name": null, 00:11:16.969 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.969 "is_configured": false, 00:11:16.969 "data_offset": 2048, 00:11:16.969 "data_size": 63488 00:11:16.969 }, 00:11:16.969 { 00:11:16.969 "name": null, 00:11:16.969 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:16.969 "is_configured": false, 00:11:16.969 "data_offset": 2048, 00:11:16.969 "data_size": 63488 00:11:16.969 } 00:11:16.969 ] 00:11:16.969 }' 
00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.969 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.538 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:17.538 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:17.538 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:17.538 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.538 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.538 [2024-11-18 03:11:20.904454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:17.538 [2024-11-18 03:11:20.904600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.538 [2024-11-18 03:11:20.904669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:17.538 [2024-11-18 03:11:20.904709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.538 [2024-11-18 03:11:20.905239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.538 [2024-11-18 03:11:20.905311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:17.538 [2024-11-18 03:11:20.905427] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:17.538 [2024-11-18 03:11:20.905485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:17.538 pt3 00:11:17.538 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.538 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:11:17.538 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.538 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.538 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.538 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.539 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.539 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.539 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.539 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.539 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.539 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.539 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.539 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.539 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.539 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.539 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.539 "name": "raid_bdev1", 00:11:17.539 "uuid": "689847a8-9ade-4bf5-896e-7fc6d1fd4771", 00:11:17.539 "strip_size_kb": 0, 00:11:17.539 "state": "configuring", 00:11:17.539 "raid_level": "raid1", 00:11:17.539 "superblock": true, 00:11:17.539 "num_base_bdevs": 4, 00:11:17.539 "num_base_bdevs_discovered": 2, 00:11:17.539 "num_base_bdevs_operational": 3, 00:11:17.539 
"base_bdevs_list": [ 00:11:17.539 { 00:11:17.539 "name": null, 00:11:17.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.539 "is_configured": false, 00:11:17.539 "data_offset": 2048, 00:11:17.539 "data_size": 63488 00:11:17.539 }, 00:11:17.539 { 00:11:17.539 "name": "pt2", 00:11:17.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.539 "is_configured": true, 00:11:17.539 "data_offset": 2048, 00:11:17.539 "data_size": 63488 00:11:17.539 }, 00:11:17.539 { 00:11:17.539 "name": "pt3", 00:11:17.539 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.539 "is_configured": true, 00:11:17.539 "data_offset": 2048, 00:11:17.539 "data_size": 63488 00:11:17.539 }, 00:11:17.539 { 00:11:17.539 "name": null, 00:11:17.539 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:17.539 "is_configured": false, 00:11:17.539 "data_offset": 2048, 00:11:17.539 "data_size": 63488 00:11:17.539 } 00:11:17.539 ] 00:11:17.539 }' 00:11:17.539 03:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.539 03:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.109 [2024-11-18 03:11:21.411612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:18.109 [2024-11-18 03:11:21.411766] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.109 [2024-11-18 03:11:21.411828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:18.109 [2024-11-18 03:11:21.411878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.109 [2024-11-18 03:11:21.412364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.109 [2024-11-18 03:11:21.412433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:18.109 [2024-11-18 03:11:21.412553] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:18.109 [2024-11-18 03:11:21.412621] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:18.109 [2024-11-18 03:11:21.412772] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:18.109 [2024-11-18 03:11:21.412818] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:18.109 [2024-11-18 03:11:21.413117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:18.109 [2024-11-18 03:11:21.413311] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:18.109 [2024-11-18 03:11:21.413361] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:11:18.109 [2024-11-18 03:11:21.413532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.109 pt4 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.109 "name": "raid_bdev1", 00:11:18.109 "uuid": "689847a8-9ade-4bf5-896e-7fc6d1fd4771", 00:11:18.109 "strip_size_kb": 0, 00:11:18.109 "state": "online", 00:11:18.109 "raid_level": "raid1", 00:11:18.109 "superblock": true, 00:11:18.109 "num_base_bdevs": 4, 00:11:18.109 "num_base_bdevs_discovered": 3, 00:11:18.109 "num_base_bdevs_operational": 3, 00:11:18.109 "base_bdevs_list": [ 00:11:18.109 { 00:11:18.109 "name": null, 00:11:18.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.109 "is_configured": false, 00:11:18.109 
"data_offset": 2048, 00:11:18.109 "data_size": 63488 00:11:18.109 }, 00:11:18.109 { 00:11:18.109 "name": "pt2", 00:11:18.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.109 "is_configured": true, 00:11:18.109 "data_offset": 2048, 00:11:18.109 "data_size": 63488 00:11:18.109 }, 00:11:18.109 { 00:11:18.109 "name": "pt3", 00:11:18.109 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.109 "is_configured": true, 00:11:18.109 "data_offset": 2048, 00:11:18.109 "data_size": 63488 00:11:18.109 }, 00:11:18.109 { 00:11:18.109 "name": "pt4", 00:11:18.109 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:18.109 "is_configured": true, 00:11:18.109 "data_offset": 2048, 00:11:18.109 "data_size": 63488 00:11:18.109 } 00:11:18.109 ] 00:11:18.109 }' 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.109 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.370 [2024-11-18 03:11:21.842979] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:18.370 [2024-11-18 03:11:21.843092] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.370 [2024-11-18 03:11:21.843187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.370 [2024-11-18 03:11:21.843273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.370 [2024-11-18 03:11:21.843285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:11:18.370 03:11:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.370 [2024-11-18 03:11:21.918840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:18.370 [2024-11-18 03:11:21.918980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:18.370 [2024-11-18 03:11:21.919064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:18.370 [2024-11-18 03:11:21.919107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.370 [2024-11-18 03:11:21.921684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.370 [2024-11-18 03:11:21.921769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:18.370 [2024-11-18 03:11:21.921884] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:18.370 [2024-11-18 03:11:21.921971] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:18.370 [2024-11-18 03:11:21.922135] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:18.370 [2024-11-18 03:11:21.922200] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:18.370 [2024-11-18 03:11:21.922254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:11:18.370 [2024-11-18 03:11:21.922350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:18.370 [2024-11-18 03:11:21.922493] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:18.370 pt1 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.370 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.371 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.371 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.371 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.371 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.630 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.630 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.630 "name": "raid_bdev1", 00:11:18.630 "uuid": "689847a8-9ade-4bf5-896e-7fc6d1fd4771", 00:11:18.630 "strip_size_kb": 0, 00:11:18.630 "state": "configuring", 00:11:18.630 "raid_level": "raid1", 00:11:18.630 "superblock": true, 00:11:18.630 "num_base_bdevs": 4, 00:11:18.630 "num_base_bdevs_discovered": 2, 00:11:18.630 "num_base_bdevs_operational": 3, 00:11:18.630 "base_bdevs_list": [ 00:11:18.630 { 00:11:18.630 "name": null, 00:11:18.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.630 "is_configured": false, 00:11:18.630 "data_offset": 2048, 00:11:18.631 
"data_size": 63488 00:11:18.631 }, 00:11:18.631 { 00:11:18.631 "name": "pt2", 00:11:18.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.631 "is_configured": true, 00:11:18.631 "data_offset": 2048, 00:11:18.631 "data_size": 63488 00:11:18.631 }, 00:11:18.631 { 00:11:18.631 "name": "pt3", 00:11:18.631 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.631 "is_configured": true, 00:11:18.631 "data_offset": 2048, 00:11:18.631 "data_size": 63488 00:11:18.631 }, 00:11:18.631 { 00:11:18.631 "name": null, 00:11:18.631 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:18.631 "is_configured": false, 00:11:18.631 "data_offset": 2048, 00:11:18.631 "data_size": 63488 00:11:18.631 } 00:11:18.631 ] 00:11:18.631 }' 00:11:18.631 03:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.631 03:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.891 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:18.891 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:18.891 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.891 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.891 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.891 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:18.891 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:18.891 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.891 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.891 [2024-11-18 
03:11:22.457952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:18.891 [2024-11-18 03:11:22.458094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.891 [2024-11-18 03:11:22.458146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:18.891 [2024-11-18 03:11:22.458184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.891 [2024-11-18 03:11:22.458682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.891 [2024-11-18 03:11:22.458750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:18.891 [2024-11-18 03:11:22.458870] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:18.891 [2024-11-18 03:11:22.458931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:18.891 [2024-11-18 03:11:22.459109] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:18.891 [2024-11-18 03:11:22.459162] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:18.891 [2024-11-18 03:11:22.459454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:18.891 [2024-11-18 03:11:22.459631] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:18.891 [2024-11-18 03:11:22.459674] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:18.891 [2024-11-18 03:11:22.459840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.891 pt4 00:11:18.891 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.891 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:18.891 03:11:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.891 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.891 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.891 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.891 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.891 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.891 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.891 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.151 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.151 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.151 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.151 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.151 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.151 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.151 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.151 "name": "raid_bdev1", 00:11:19.151 "uuid": "689847a8-9ade-4bf5-896e-7fc6d1fd4771", 00:11:19.151 "strip_size_kb": 0, 00:11:19.151 "state": "online", 00:11:19.151 "raid_level": "raid1", 00:11:19.151 "superblock": true, 00:11:19.151 "num_base_bdevs": 4, 00:11:19.151 "num_base_bdevs_discovered": 3, 00:11:19.151 "num_base_bdevs_operational": 3, 00:11:19.151 "base_bdevs_list": [ 00:11:19.151 { 
00:11:19.151 "name": null, 00:11:19.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.151 "is_configured": false, 00:11:19.151 "data_offset": 2048, 00:11:19.151 "data_size": 63488 00:11:19.151 }, 00:11:19.151 { 00:11:19.151 "name": "pt2", 00:11:19.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:19.151 "is_configured": true, 00:11:19.151 "data_offset": 2048, 00:11:19.151 "data_size": 63488 00:11:19.151 }, 00:11:19.151 { 00:11:19.151 "name": "pt3", 00:11:19.151 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:19.151 "is_configured": true, 00:11:19.151 "data_offset": 2048, 00:11:19.151 "data_size": 63488 00:11:19.151 }, 00:11:19.151 { 00:11:19.151 "name": "pt4", 00:11:19.151 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:19.151 "is_configured": true, 00:11:19.151 "data_offset": 2048, 00:11:19.151 "data_size": 63488 00:11:19.151 } 00:11:19.151 ] 00:11:19.151 }' 00:11:19.151 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.151 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.411 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:19.411 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.411 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.411 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:19.411 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.411 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:19.411 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:19.411 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:19.411 
03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.411 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.411 [2024-11-18 03:11:22.969435] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.671 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.671 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 689847a8-9ade-4bf5-896e-7fc6d1fd4771 '!=' 689847a8-9ade-4bf5-896e-7fc6d1fd4771 ']' 00:11:19.671 03:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85433 00:11:19.671 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 85433 ']' 00:11:19.671 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 85433 00:11:19.671 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:19.671 03:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:19.671 03:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85433 00:11:19.671 killing process with pid 85433 00:11:19.671 03:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:19.671 03:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:19.671 03:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85433' 00:11:19.671 03:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 85433 00:11:19.671 [2024-11-18 03:11:23.031209] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.671 [2024-11-18 03:11:23.031309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.671 03:11:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 85433 00:11:19.671 [2024-11-18 03:11:23.031397] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.671 [2024-11-18 03:11:23.031408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:11:19.671 [2024-11-18 03:11:23.077099] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:19.941 03:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:19.941 00:11:19.941 real 0m7.630s 00:11:19.941 user 0m12.904s 00:11:19.941 sys 0m1.572s 00:11:19.941 ************************************ 00:11:19.941 END TEST raid_superblock_test 00:11:19.941 ************************************ 00:11:19.941 03:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:19.941 03:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.941 03:11:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:19.941 03:11:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:19.941 03:11:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:19.941 03:11:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:19.941 ************************************ 00:11:19.941 START TEST raid_read_error_test 00:11:19.941 ************************************ 00:11:19.941 03:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:11:19.941 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:19.941 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:19.941 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:19.941 
03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:19.942 03:11:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TUQM8UP7nj 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85916 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85916 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 85916 ']' 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:19.942 03:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.942 [2024-11-18 03:11:23.503474] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:19.942 [2024-11-18 03:11:23.503723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85916 ] 00:11:20.214 [2024-11-18 03:11:23.665689] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.214 [2024-11-18 03:11:23.716190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.214 [2024-11-18 03:11:23.758914] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.214 [2024-11-18 03:11:23.758988] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.784 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:20.784 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:20.784 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:20.784 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:20.784 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.784 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.784 BaseBdev1_malloc 00:11:20.784 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.784 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:20.784 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.784 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.784 true 00:11:20.784 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:20.784 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:20.784 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.784 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.045 [2024-11-18 03:11:24.361753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:21.045 [2024-11-18 03:11:24.361821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.045 [2024-11-18 03:11:24.361862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:21.045 [2024-11-18 03:11:24.361871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.045 [2024-11-18 03:11:24.364248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.045 [2024-11-18 03:11:24.364287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:21.045 BaseBdev1 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.045 BaseBdev2_malloc 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.045 true 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.045 [2024-11-18 03:11:24.411973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:21.045 [2024-11-18 03:11:24.412030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.045 [2024-11-18 03:11:24.412069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:21.045 [2024-11-18 03:11:24.412078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.045 [2024-11-18 03:11:24.414203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.045 [2024-11-18 03:11:24.414241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:21.045 BaseBdev2 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.045 BaseBdev3_malloc 00:11:21.045 03:11:24 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.045 true 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.045 [2024-11-18 03:11:24.452707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:21.045 [2024-11-18 03:11:24.452763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.045 [2024-11-18 03:11:24.452781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:21.045 [2024-11-18 03:11:24.452790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.045 [2024-11-18 03:11:24.454909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.045 [2024-11-18 03:11:24.454955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:21.045 BaseBdev3 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.045 BaseBdev4_malloc 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.045 true 00:11:21.045 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.046 [2024-11-18 03:11:24.493453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:21.046 [2024-11-18 03:11:24.493509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.046 [2024-11-18 03:11:24.493532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:21.046 [2024-11-18 03:11:24.493540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.046 [2024-11-18 03:11:24.495680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.046 [2024-11-18 03:11:24.495723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:21.046 BaseBdev4 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.046 [2024-11-18 03:11:24.505485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.046 [2024-11-18 03:11:24.507457] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.046 [2024-11-18 03:11:24.507547] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:21.046 [2024-11-18 03:11:24.507603] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:21.046 [2024-11-18 03:11:24.507814] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:11:21.046 [2024-11-18 03:11:24.507825] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:21.046 [2024-11-18 03:11:24.508127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:21.046 [2024-11-18 03:11:24.508274] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:11:21.046 [2024-11-18 03:11:24.508294] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:11:21.046 [2024-11-18 03:11:24.508442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:21.046 03:11:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.046 "name": "raid_bdev1", 00:11:21.046 "uuid": "c323b502-aa5a-4563-871e-cf60784f0e4f", 00:11:21.046 "strip_size_kb": 0, 00:11:21.046 "state": "online", 00:11:21.046 "raid_level": "raid1", 00:11:21.046 "superblock": true, 00:11:21.046 "num_base_bdevs": 4, 00:11:21.046 "num_base_bdevs_discovered": 4, 00:11:21.046 "num_base_bdevs_operational": 4, 00:11:21.046 "base_bdevs_list": [ 00:11:21.046 { 
00:11:21.046 "name": "BaseBdev1", 00:11:21.046 "uuid": "1525aa19-8cf6-52fb-b987-e34e1d7990af", 00:11:21.046 "is_configured": true, 00:11:21.046 "data_offset": 2048, 00:11:21.046 "data_size": 63488 00:11:21.046 }, 00:11:21.046 { 00:11:21.046 "name": "BaseBdev2", 00:11:21.046 "uuid": "ecff8060-320a-5376-ac01-c947b1f45da3", 00:11:21.046 "is_configured": true, 00:11:21.046 "data_offset": 2048, 00:11:21.046 "data_size": 63488 00:11:21.046 }, 00:11:21.046 { 00:11:21.046 "name": "BaseBdev3", 00:11:21.046 "uuid": "3b4a9161-245d-520a-89de-d27765257bb8", 00:11:21.046 "is_configured": true, 00:11:21.046 "data_offset": 2048, 00:11:21.046 "data_size": 63488 00:11:21.046 }, 00:11:21.046 { 00:11:21.046 "name": "BaseBdev4", 00:11:21.046 "uuid": "832b0b7c-3e7b-5519-ab43-5ebe4a97068b", 00:11:21.046 "is_configured": true, 00:11:21.046 "data_offset": 2048, 00:11:21.046 "data_size": 63488 00:11:21.046 } 00:11:21.046 ] 00:11:21.046 }' 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.046 03:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.615 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:21.615 03:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:21.615 [2024-11-18 03:11:25.028946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.554 03:11:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.554 03:11:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.554 03:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.554 "name": "raid_bdev1", 00:11:22.554 "uuid": "c323b502-aa5a-4563-871e-cf60784f0e4f", 00:11:22.554 "strip_size_kb": 0, 00:11:22.554 "state": "online", 00:11:22.554 "raid_level": "raid1", 00:11:22.554 "superblock": true, 00:11:22.554 "num_base_bdevs": 4, 00:11:22.554 "num_base_bdevs_discovered": 4, 00:11:22.554 "num_base_bdevs_operational": 4, 00:11:22.554 "base_bdevs_list": [ 00:11:22.554 { 00:11:22.554 "name": "BaseBdev1", 00:11:22.554 "uuid": "1525aa19-8cf6-52fb-b987-e34e1d7990af", 00:11:22.554 "is_configured": true, 00:11:22.554 "data_offset": 2048, 00:11:22.554 "data_size": 63488 00:11:22.554 }, 00:11:22.554 { 00:11:22.554 "name": "BaseBdev2", 00:11:22.554 "uuid": "ecff8060-320a-5376-ac01-c947b1f45da3", 00:11:22.554 "is_configured": true, 00:11:22.554 "data_offset": 2048, 00:11:22.554 "data_size": 63488 00:11:22.554 }, 00:11:22.554 { 00:11:22.554 "name": "BaseBdev3", 00:11:22.554 "uuid": "3b4a9161-245d-520a-89de-d27765257bb8", 00:11:22.554 "is_configured": true, 00:11:22.554 "data_offset": 2048, 00:11:22.554 "data_size": 63488 00:11:22.555 }, 00:11:22.555 { 00:11:22.555 "name": "BaseBdev4", 00:11:22.555 "uuid": "832b0b7c-3e7b-5519-ab43-5ebe4a97068b", 00:11:22.555 "is_configured": true, 00:11:22.555 "data_offset": 2048, 00:11:22.555 "data_size": 63488 00:11:22.555 } 00:11:22.555 ] 00:11:22.555 }' 00:11:22.555 03:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.555 03:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.125 03:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:23.125 03:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.125 03:11:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:23.125 [2024-11-18 03:11:26.408047] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:23.125 [2024-11-18 03:11:26.408154] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.125 [2024-11-18 03:11:26.410785] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.125 [2024-11-18 03:11:26.410870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.125 [2024-11-18 03:11:26.411035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.125 [2024-11-18 03:11:26.411084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:11:23.125 03:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.125 { 00:11:23.125 "results": [ 00:11:23.125 { 00:11:23.125 "job": "raid_bdev1", 00:11:23.125 "core_mask": "0x1", 00:11:23.125 "workload": "randrw", 00:11:23.125 "percentage": 50, 00:11:23.125 "status": "finished", 00:11:23.125 "queue_depth": 1, 00:11:23.125 "io_size": 131072, 00:11:23.125 "runtime": 1.380005, 00:11:23.125 "iops": 11113.727848812143, 00:11:23.125 "mibps": 1389.2159811015179, 00:11:23.125 "io_failed": 0, 00:11:23.125 "io_timeout": 0, 00:11:23.125 "avg_latency_us": 87.3996710298724, 00:11:23.125 "min_latency_us": 23.475982532751093, 00:11:23.125 "max_latency_us": 1545.3903930131005 00:11:23.125 } 00:11:23.125 ], 00:11:23.125 "core_count": 1 00:11:23.125 } 00:11:23.125 03:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85916 00:11:23.125 03:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 85916 ']' 00:11:23.125 03:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 85916 00:11:23.125 03:11:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:11:23.125 03:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:23.126 03:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85916 00:11:23.126 03:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:23.126 killing process with pid 85916 00:11:23.126 03:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:23.126 03:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85916' 00:11:23.126 03:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 85916 00:11:23.126 [2024-11-18 03:11:26.458726] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:23.126 03:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 85916 00:11:23.126 [2024-11-18 03:11:26.495895] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.386 03:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TUQM8UP7nj 00:11:23.386 03:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:23.386 03:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:23.386 03:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:23.386 03:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:23.386 03:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:23.386 03:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:23.386 03:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:23.386 00:11:23.386 real 0m3.343s 00:11:23.386 user 0m4.208s 00:11:23.386 sys 0m0.556s 
00:11:23.386 03:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.386 03:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.386 ************************************ 00:11:23.386 END TEST raid_read_error_test 00:11:23.386 ************************************ 00:11:23.386 03:11:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:23.386 03:11:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:23.386 03:11:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.386 03:11:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.386 ************************************ 00:11:23.386 START TEST raid_write_error_test 00:11:23.386 ************************************ 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3gJZlybI1Y 00:11:23.386 03:11:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=86045 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 86045 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 86045 ']' 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.386 03:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.387 03:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.387 03:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.387 03:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.387 [2024-11-18 03:11:26.912895] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:23.387 [2024-11-18 03:11:26.913052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86045 ] 00:11:23.646 [2024-11-18 03:11:27.075515] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.646 [2024-11-18 03:11:27.126079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.646 [2024-11-18 03:11:27.168319] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.646 [2024-11-18 03:11:27.168370] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.218 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:24.218 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:24.218 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.218 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:24.218 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.218 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.218 BaseBdev1_malloc 00:11:24.218 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.218 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:24.218 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.218 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.218 true 00:11:24.218 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:24.218 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:24.218 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.218 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.479 [2024-11-18 03:11:27.794898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:24.479 [2024-11-18 03:11:27.794988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.479 [2024-11-18 03:11:27.795010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:24.479 [2024-11-18 03:11:27.795027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.479 [2024-11-18 03:11:27.797346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.479 [2024-11-18 03:11:27.797389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:24.479 BaseBdev1 00:11:24.479 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.479 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.479 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:24.479 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.479 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.479 BaseBdev2_malloc 00:11:24.479 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.479 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:24.479 03:11:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.479 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.479 true 00:11:24.479 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.479 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:24.479 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.479 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.480 [2024-11-18 03:11:27.844055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:24.480 [2024-11-18 03:11:27.844110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.480 [2024-11-18 03:11:27.844131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:24.480 [2024-11-18 03:11:27.844140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.480 [2024-11-18 03:11:27.846296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.480 [2024-11-18 03:11:27.846333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:24.480 BaseBdev2 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:24.480 BaseBdev3_malloc 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.480 true 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.480 [2024-11-18 03:11:27.884945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:24.480 [2024-11-18 03:11:27.885010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.480 [2024-11-18 03:11:27.885031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:24.480 [2024-11-18 03:11:27.885040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.480 [2024-11-18 03:11:27.887184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.480 [2024-11-18 03:11:27.887220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:24.480 BaseBdev3 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.480 BaseBdev4_malloc 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.480 true 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.480 [2024-11-18 03:11:27.925666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:24.480 [2024-11-18 03:11:27.925733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.480 [2024-11-18 03:11:27.925756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:24.480 [2024-11-18 03:11:27.925766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.480 [2024-11-18 03:11:27.927839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.480 [2024-11-18 03:11:27.927874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:24.480 BaseBdev4 
00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.480 [2024-11-18 03:11:27.937692] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.480 [2024-11-18 03:11:27.939587] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.480 [2024-11-18 03:11:27.939684] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.480 [2024-11-18 03:11:27.939741] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:24.480 [2024-11-18 03:11:27.939946] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:11:24.480 [2024-11-18 03:11:27.939973] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:24.480 [2024-11-18 03:11:27.940242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:24.480 [2024-11-18 03:11:27.940398] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:11:24.480 [2024-11-18 03:11:27.940420] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:11:24.480 [2024-11-18 03:11:27.940559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.480 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.480 "name": "raid_bdev1", 00:11:24.480 "uuid": "499556df-1b00-47f2-8e40-68da7fbe9adb", 00:11:24.480 "strip_size_kb": 0, 00:11:24.480 "state": "online", 00:11:24.480 "raid_level": "raid1", 00:11:24.480 "superblock": true, 00:11:24.480 "num_base_bdevs": 4, 00:11:24.480 "num_base_bdevs_discovered": 4, 00:11:24.480 
"num_base_bdevs_operational": 4, 00:11:24.480 "base_bdevs_list": [ 00:11:24.480 { 00:11:24.480 "name": "BaseBdev1", 00:11:24.480 "uuid": "f2487339-ea43-5538-9a74-76e25b2985c0", 00:11:24.480 "is_configured": true, 00:11:24.480 "data_offset": 2048, 00:11:24.480 "data_size": 63488 00:11:24.480 }, 00:11:24.480 { 00:11:24.480 "name": "BaseBdev2", 00:11:24.480 "uuid": "eef4d7d6-93cb-5a13-894f-2af4f0701f5b", 00:11:24.480 "is_configured": true, 00:11:24.480 "data_offset": 2048, 00:11:24.480 "data_size": 63488 00:11:24.480 }, 00:11:24.480 { 00:11:24.480 "name": "BaseBdev3", 00:11:24.480 "uuid": "09002644-3c10-5cef-9f7a-64e64f680aca", 00:11:24.481 "is_configured": true, 00:11:24.481 "data_offset": 2048, 00:11:24.481 "data_size": 63488 00:11:24.481 }, 00:11:24.481 { 00:11:24.481 "name": "BaseBdev4", 00:11:24.481 "uuid": "5ebc4c98-d548-51cc-b49a-696acc2a873e", 00:11:24.481 "is_configured": true, 00:11:24.481 "data_offset": 2048, 00:11:24.481 "data_size": 63488 00:11:24.481 } 00:11:24.481 ] 00:11:24.481 }' 00:11:24.481 03:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.481 03:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.743 03:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:24.743 03:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:25.002 [2024-11-18 03:11:28.417239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.941 [2024-11-18 03:11:29.328878] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:25.941 [2024-11-18 03:11:29.328934] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:25.941 [2024-11-18 03:11:29.329191] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.941 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.941 "name": "raid_bdev1", 00:11:25.941 "uuid": "499556df-1b00-47f2-8e40-68da7fbe9adb", 00:11:25.941 "strip_size_kb": 0, 00:11:25.941 "state": "online", 00:11:25.941 "raid_level": "raid1", 00:11:25.941 "superblock": true, 00:11:25.941 "num_base_bdevs": 4, 00:11:25.941 "num_base_bdevs_discovered": 3, 00:11:25.941 "num_base_bdevs_operational": 3, 00:11:25.941 "base_bdevs_list": [ 00:11:25.941 { 00:11:25.941 "name": null, 00:11:25.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.941 "is_configured": false, 00:11:25.941 "data_offset": 0, 00:11:25.941 "data_size": 63488 00:11:25.941 }, 00:11:25.941 { 00:11:25.941 "name": "BaseBdev2", 00:11:25.941 "uuid": "eef4d7d6-93cb-5a13-894f-2af4f0701f5b", 00:11:25.941 "is_configured": true, 00:11:25.941 "data_offset": 2048, 00:11:25.941 "data_size": 63488 00:11:25.941 }, 00:11:25.941 { 00:11:25.941 "name": "BaseBdev3", 00:11:25.941 "uuid": "09002644-3c10-5cef-9f7a-64e64f680aca", 00:11:25.941 "is_configured": true, 00:11:25.941 "data_offset": 2048, 00:11:25.941 "data_size": 63488 00:11:25.941 }, 00:11:25.942 { 00:11:25.942 "name": "BaseBdev4", 00:11:25.942 "uuid": "5ebc4c98-d548-51cc-b49a-696acc2a873e", 00:11:25.942 "is_configured": true, 00:11:25.942 "data_offset": 2048, 00:11:25.942 "data_size": 63488 00:11:25.942 } 00:11:25.942 ] 
00:11:25.942 }' 00:11:25.942 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.942 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.511 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:26.511 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.511 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.511 [2024-11-18 03:11:29.800489] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.511 [2024-11-18 03:11:29.800530] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.511 [2024-11-18 03:11:29.803151] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.511 [2024-11-18 03:11:29.803227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.511 [2024-11-18 03:11:29.803355] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.511 [2024-11-18 03:11:29.803374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:11:26.511 { 00:11:26.511 "results": [ 00:11:26.511 { 00:11:26.511 "job": "raid_bdev1", 00:11:26.512 "core_mask": "0x1", 00:11:26.512 "workload": "randrw", 00:11:26.512 "percentage": 50, 00:11:26.512 "status": "finished", 00:11:26.512 "queue_depth": 1, 00:11:26.512 "io_size": 131072, 00:11:26.512 "runtime": 1.383829, 00:11:26.512 "iops": 11838.167866116406, 00:11:26.512 "mibps": 1479.7709832645508, 00:11:26.512 "io_failed": 0, 00:11:26.512 "io_timeout": 0, 00:11:26.512 "avg_latency_us": 81.77714959277382, 00:11:26.512 "min_latency_us": 23.58777292576419, 00:11:26.512 "max_latency_us": 1574.0087336244542 00:11:26.512 } 00:11:26.512 ], 00:11:26.512 "core_count": 1 
00:11:26.512 } 00:11:26.512 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.512 03:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 86045 00:11:26.512 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 86045 ']' 00:11:26.512 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 86045 00:11:26.512 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:26.512 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.512 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86045 00:11:26.512 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:26.512 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:26.512 killing process with pid 86045 00:11:26.512 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86045' 00:11:26.512 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 86045 00:11:26.512 [2024-11-18 03:11:29.849329] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:26.512 03:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 86045 00:11:26.512 [2024-11-18 03:11:29.886164] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:26.772 03:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3gJZlybI1Y 00:11:26.772 03:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:26.772 03:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:26.772 03:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:11:26.772 03:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:26.772 03:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:26.772 03:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:26.772 03:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:26.772 00:11:26.772 real 0m3.320s 00:11:26.772 user 0m4.159s 00:11:26.772 sys 0m0.556s 00:11:26.772 03:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:26.772 03:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.772 ************************************ 00:11:26.772 END TEST raid_write_error_test 00:11:26.772 ************************************ 00:11:26.772 03:11:30 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:26.772 03:11:30 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:26.772 03:11:30 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:26.772 03:11:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:26.772 03:11:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:26.772 03:11:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:26.772 ************************************ 00:11:26.772 START TEST raid_rebuild_test 00:11:26.772 ************************************ 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:26.772 
03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=86172 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 86172 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 86172 ']' 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:26.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:26.772 03:11:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.772 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:26.772 Zero copy mechanism will not be used. 00:11:26.772 [2024-11-18 03:11:30.295700] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:26.772 [2024-11-18 03:11:30.295832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86172 ] 00:11:27.032 [2024-11-18 03:11:30.456190] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.032 [2024-11-18 03:11:30.507090] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.032 [2024-11-18 03:11:30.549285] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.032 [2024-11-18 03:11:30.549331] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.602 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:27.602 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:11:27.602 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:27.602 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:27.602 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.602 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.602 BaseBdev1_malloc 00:11:27.602 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.602 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:27.602 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.602 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.602 [2024-11-18 03:11:31.167875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:27.602 
[2024-11-18 03:11:31.167970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.602 [2024-11-18 03:11:31.168006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:27.602 [2024-11-18 03:11:31.168028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.602 [2024-11-18 03:11:31.170196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.602 [2024-11-18 03:11:31.170231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:27.602 BaseBdev1 00:11:27.602 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.602 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:27.602 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:27.602 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.602 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.863 BaseBdev2_malloc 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.863 [2024-11-18 03:11:31.206407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:27.863 [2024-11-18 03:11:31.206478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.863 [2024-11-18 03:11:31.206502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:11:27.863 [2024-11-18 03:11:31.206512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.863 [2024-11-18 03:11:31.208963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.863 [2024-11-18 03:11:31.209013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:27.863 BaseBdev2 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.863 spare_malloc 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.863 spare_delay 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.863 [2024-11-18 03:11:31.247105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:27.863 [2024-11-18 03:11:31.247182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:27.863 [2024-11-18 03:11:31.247207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:27.863 [2024-11-18 03:11:31.247216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.863 [2024-11-18 03:11:31.249345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.863 [2024-11-18 03:11:31.249382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:27.863 spare 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.863 [2024-11-18 03:11:31.259105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.863 [2024-11-18 03:11:31.260958] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.863 [2024-11-18 03:11:31.261064] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:27.863 [2024-11-18 03:11:31.261077] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:27.863 [2024-11-18 03:11:31.261362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:27.863 [2024-11-18 03:11:31.261492] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:27.863 [2024-11-18 03:11:31.261517] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:27.863 [2024-11-18 03:11:31.261640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.863 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.864 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.864 "name": "raid_bdev1", 00:11:27.864 "uuid": "68ab8188-bc3f-4922-9aa0-2c80c3f248a4", 00:11:27.864 "strip_size_kb": 0, 00:11:27.864 "state": "online", 00:11:27.864 
"raid_level": "raid1", 00:11:27.864 "superblock": false, 00:11:27.864 "num_base_bdevs": 2, 00:11:27.864 "num_base_bdevs_discovered": 2, 00:11:27.864 "num_base_bdevs_operational": 2, 00:11:27.864 "base_bdevs_list": [ 00:11:27.864 { 00:11:27.864 "name": "BaseBdev1", 00:11:27.864 "uuid": "07b7a053-2297-5c49-9a08-7aaf13c5aeab", 00:11:27.864 "is_configured": true, 00:11:27.864 "data_offset": 0, 00:11:27.864 "data_size": 65536 00:11:27.864 }, 00:11:27.864 { 00:11:27.864 "name": "BaseBdev2", 00:11:27.864 "uuid": "bcf8d8de-a26c-5d0d-a3ea-17a00ce9196e", 00:11:27.864 "is_configured": true, 00:11:27.864 "data_offset": 0, 00:11:27.864 "data_size": 65536 00:11:27.864 } 00:11:27.864 ] 00:11:27.864 }' 00:11:27.864 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.864 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.434 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:28.434 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:28.434 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.434 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.434 [2024-11-18 03:11:31.714752] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.434 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.434 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:28.434 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:28.434 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.434 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.434 03:11:31 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.434 03:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.435 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:28.435 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:28.435 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:28.435 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:28.435 03:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:28.435 03:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:28.435 03:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:28.435 03:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:28.435 03:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:28.435 03:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:28.435 03:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:28.435 03:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:28.435 03:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:28.435 03:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:28.435 [2024-11-18 03:11:31.962073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:28.435 /dev/nbd0 00:11:28.435 03:11:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:28.435 03:11:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:11:28.435 03:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:28.435 03:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:28.435 03:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:28.435 03:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:28.435 03:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:28.695 03:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:28.695 03:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:28.695 03:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:28.695 03:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.695 1+0 records in 00:11:28.695 1+0 records out 00:11:28.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381675 s, 10.7 MB/s 00:11:28.695 03:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.695 03:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:28.695 03:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.695 03:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:28.695 03:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:28.695 03:11:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.695 03:11:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:28.695 03:11:32 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:28.695 03:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:28.695 03:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:32.896 65536+0 records in 00:11:32.896 65536+0 records out 00:11:32.896 33554432 bytes (34 MB, 32 MiB) copied, 3.81509 s, 8.8 MB/s 00:11:32.896 03:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:32.896 03:11:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:32.896 03:11:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:32.896 03:11:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:32.896 03:11:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:32.896 03:11:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.896 03:11:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:32.896 [2024-11-18 03:11:36.075107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.896 [2024-11-18 03:11:36.095185] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.896 03:11:36 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.896 "name": "raid_bdev1", 00:11:32.896 "uuid": "68ab8188-bc3f-4922-9aa0-2c80c3f248a4", 00:11:32.896 "strip_size_kb": 0, 00:11:32.896 "state": "online", 00:11:32.896 "raid_level": "raid1", 00:11:32.896 "superblock": false, 00:11:32.896 "num_base_bdevs": 2, 00:11:32.896 "num_base_bdevs_discovered": 1, 00:11:32.896 "num_base_bdevs_operational": 1, 00:11:32.896 "base_bdevs_list": [ 00:11:32.896 { 00:11:32.896 "name": null, 00:11:32.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.896 "is_configured": false, 00:11:32.896 "data_offset": 0, 00:11:32.896 "data_size": 65536 00:11:32.896 }, 00:11:32.896 { 00:11:32.896 "name": "BaseBdev2", 00:11:32.896 "uuid": "bcf8d8de-a26c-5d0d-a3ea-17a00ce9196e", 00:11:32.896 "is_configured": true, 00:11:32.896 "data_offset": 0, 00:11:32.896 "data_size": 65536 00:11:32.896 } 00:11:32.896 ] 00:11:32.896 }' 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.896 03:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.156 03:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:33.156 03:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.156 03:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.156 [2024-11-18 03:11:36.518581] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:33.156 [2024-11-18 03:11:36.522862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09a30 
00:11:33.156 03:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.156 03:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:33.156 [2024-11-18 03:11:36.524897] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:34.113 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:34.113 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:34.113 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:34.113 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:34.113 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:34.113 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.113 03:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.113 03:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.113 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.113 03:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.113 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:34.113 "name": "raid_bdev1", 00:11:34.113 "uuid": "68ab8188-bc3f-4922-9aa0-2c80c3f248a4", 00:11:34.113 "strip_size_kb": 0, 00:11:34.113 "state": "online", 00:11:34.113 "raid_level": "raid1", 00:11:34.113 "superblock": false, 00:11:34.113 "num_base_bdevs": 2, 00:11:34.113 "num_base_bdevs_discovered": 2, 00:11:34.113 "num_base_bdevs_operational": 2, 00:11:34.113 "process": { 00:11:34.113 "type": "rebuild", 00:11:34.113 "target": "spare", 00:11:34.113 "progress": { 00:11:34.113 
"blocks": 20480, 00:11:34.113 "percent": 31 00:11:34.113 } 00:11:34.113 }, 00:11:34.113 "base_bdevs_list": [ 00:11:34.113 { 00:11:34.113 "name": "spare", 00:11:34.113 "uuid": "f009a5b2-2629-5b75-8adc-194a0ecf0003", 00:11:34.113 "is_configured": true, 00:11:34.113 "data_offset": 0, 00:11:34.113 "data_size": 65536 00:11:34.113 }, 00:11:34.113 { 00:11:34.113 "name": "BaseBdev2", 00:11:34.113 "uuid": "bcf8d8de-a26c-5d0d-a3ea-17a00ce9196e", 00:11:34.113 "is_configured": true, 00:11:34.113 "data_offset": 0, 00:11:34.113 "data_size": 65536 00:11:34.113 } 00:11:34.113 ] 00:11:34.113 }' 00:11:34.113 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:34.113 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:34.113 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:34.113 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:34.113 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:34.113 03:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.113 03:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.113 [2024-11-18 03:11:37.638266] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:34.373 [2024-11-18 03:11:37.730134] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:34.373 [2024-11-18 03:11:37.730198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.373 [2024-11-18 03:11:37.730216] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:34.373 [2024-11-18 03:11:37.730223] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:34.373 03:11:37 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.373 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:34.373 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.373 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.373 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.373 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.373 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:34.373 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.373 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.373 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.373 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.373 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.373 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.373 03:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.373 03:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.373 03:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.373 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.373 "name": "raid_bdev1", 00:11:34.373 "uuid": "68ab8188-bc3f-4922-9aa0-2c80c3f248a4", 00:11:34.373 "strip_size_kb": 0, 00:11:34.373 "state": "online", 00:11:34.373 "raid_level": "raid1", 00:11:34.373 
"superblock": false, 00:11:34.373 "num_base_bdevs": 2, 00:11:34.373 "num_base_bdevs_discovered": 1, 00:11:34.373 "num_base_bdevs_operational": 1, 00:11:34.373 "base_bdevs_list": [ 00:11:34.373 { 00:11:34.373 "name": null, 00:11:34.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.373 "is_configured": false, 00:11:34.373 "data_offset": 0, 00:11:34.373 "data_size": 65536 00:11:34.373 }, 00:11:34.373 { 00:11:34.373 "name": "BaseBdev2", 00:11:34.373 "uuid": "bcf8d8de-a26c-5d0d-a3ea-17a00ce9196e", 00:11:34.373 "is_configured": true, 00:11:34.373 "data_offset": 0, 00:11:34.373 "data_size": 65536 00:11:34.373 } 00:11:34.373 ] 00:11:34.373 }' 00:11:34.373 03:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.373 03:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.633 03:11:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:34.633 03:11:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:34.633 03:11:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:34.633 03:11:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:34.633 03:11:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:34.633 03:11:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.633 03:11:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.633 03:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.633 03:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.633 03:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.893 03:11:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:11:34.893 "name": "raid_bdev1", 00:11:34.893 "uuid": "68ab8188-bc3f-4922-9aa0-2c80c3f248a4", 00:11:34.893 "strip_size_kb": 0, 00:11:34.893 "state": "online", 00:11:34.893 "raid_level": "raid1", 00:11:34.893 "superblock": false, 00:11:34.893 "num_base_bdevs": 2, 00:11:34.893 "num_base_bdevs_discovered": 1, 00:11:34.893 "num_base_bdevs_operational": 1, 00:11:34.893 "base_bdevs_list": [ 00:11:34.893 { 00:11:34.893 "name": null, 00:11:34.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.893 "is_configured": false, 00:11:34.893 "data_offset": 0, 00:11:34.893 "data_size": 65536 00:11:34.893 }, 00:11:34.893 { 00:11:34.893 "name": "BaseBdev2", 00:11:34.893 "uuid": "bcf8d8de-a26c-5d0d-a3ea-17a00ce9196e", 00:11:34.893 "is_configured": true, 00:11:34.893 "data_offset": 0, 00:11:34.893 "data_size": 65536 00:11:34.893 } 00:11:34.893 ] 00:11:34.893 }' 00:11:34.893 03:11:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:34.893 03:11:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:34.893 03:11:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:34.893 03:11:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:34.893 03:11:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:34.893 03:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.893 03:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.893 [2024-11-18 03:11:38.329898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:34.893 [2024-11-18 03:11:38.334165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:11:34.893 03:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.893 
03:11:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:34.893 [2024-11-18 03:11:38.336223] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:35.833 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:35.833 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:35.833 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:35.833 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:35.833 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:35.833 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.833 03:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.833 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.833 03:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.833 03:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.833 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:35.833 "name": "raid_bdev1", 00:11:35.833 "uuid": "68ab8188-bc3f-4922-9aa0-2c80c3f248a4", 00:11:35.833 "strip_size_kb": 0, 00:11:35.833 "state": "online", 00:11:35.833 "raid_level": "raid1", 00:11:35.833 "superblock": false, 00:11:35.833 "num_base_bdevs": 2, 00:11:35.833 "num_base_bdevs_discovered": 2, 00:11:35.833 "num_base_bdevs_operational": 2, 00:11:35.833 "process": { 00:11:35.833 "type": "rebuild", 00:11:35.833 "target": "spare", 00:11:35.833 "progress": { 00:11:35.833 "blocks": 20480, 00:11:35.833 "percent": 31 00:11:35.833 } 00:11:35.833 }, 00:11:35.833 "base_bdevs_list": [ 
00:11:35.833 { 00:11:35.833 "name": "spare", 00:11:35.833 "uuid": "f009a5b2-2629-5b75-8adc-194a0ecf0003", 00:11:35.833 "is_configured": true, 00:11:35.833 "data_offset": 0, 00:11:35.833 "data_size": 65536 00:11:35.833 }, 00:11:35.833 { 00:11:35.833 "name": "BaseBdev2", 00:11:35.833 "uuid": "bcf8d8de-a26c-5d0d-a3ea-17a00ce9196e", 00:11:35.833 "is_configured": true, 00:11:35.833 "data_offset": 0, 00:11:35.833 "data_size": 65536 00:11:35.833 } 00:11:35.833 ] 00:11:35.833 }' 00:11:35.833 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=293 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:36.093 
03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:36.093 "name": "raid_bdev1", 00:11:36.093 "uuid": "68ab8188-bc3f-4922-9aa0-2c80c3f248a4", 00:11:36.093 "strip_size_kb": 0, 00:11:36.093 "state": "online", 00:11:36.093 "raid_level": "raid1", 00:11:36.093 "superblock": false, 00:11:36.093 "num_base_bdevs": 2, 00:11:36.093 "num_base_bdevs_discovered": 2, 00:11:36.093 "num_base_bdevs_operational": 2, 00:11:36.093 "process": { 00:11:36.093 "type": "rebuild", 00:11:36.093 "target": "spare", 00:11:36.093 "progress": { 00:11:36.093 "blocks": 22528, 00:11:36.093 "percent": 34 00:11:36.093 } 00:11:36.093 }, 00:11:36.093 "base_bdevs_list": [ 00:11:36.093 { 00:11:36.093 "name": "spare", 00:11:36.093 "uuid": "f009a5b2-2629-5b75-8adc-194a0ecf0003", 00:11:36.093 "is_configured": true, 00:11:36.093 "data_offset": 0, 00:11:36.093 "data_size": 65536 00:11:36.093 }, 00:11:36.093 { 00:11:36.093 "name": "BaseBdev2", 00:11:36.093 "uuid": "bcf8d8de-a26c-5d0d-a3ea-17a00ce9196e", 00:11:36.093 "is_configured": true, 00:11:36.093 "data_offset": 0, 00:11:36.093 "data_size": 65536 00:11:36.093 } 00:11:36.093 ] 00:11:36.093 }' 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:36.093 03:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:37.473 03:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:37.473 03:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:37.473 03:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:37.473 03:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:37.473 03:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:37.473 03:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:37.473 03:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.473 03:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.473 03:11:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.473 03:11:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.473 03:11:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.473 03:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:37.473 "name": "raid_bdev1", 00:11:37.473 "uuid": "68ab8188-bc3f-4922-9aa0-2c80c3f248a4", 00:11:37.473 "strip_size_kb": 0, 00:11:37.473 "state": "online", 00:11:37.473 "raid_level": "raid1", 00:11:37.473 "superblock": false, 00:11:37.473 "num_base_bdevs": 2, 00:11:37.473 "num_base_bdevs_discovered": 2, 00:11:37.473 "num_base_bdevs_operational": 2, 00:11:37.473 "process": { 
00:11:37.473 "type": "rebuild", 00:11:37.473 "target": "spare", 00:11:37.473 "progress": { 00:11:37.473 "blocks": 45056, 00:11:37.473 "percent": 68 00:11:37.473 } 00:11:37.473 }, 00:11:37.473 "base_bdevs_list": [ 00:11:37.473 { 00:11:37.473 "name": "spare", 00:11:37.473 "uuid": "f009a5b2-2629-5b75-8adc-194a0ecf0003", 00:11:37.473 "is_configured": true, 00:11:37.473 "data_offset": 0, 00:11:37.473 "data_size": 65536 00:11:37.473 }, 00:11:37.473 { 00:11:37.473 "name": "BaseBdev2", 00:11:37.473 "uuid": "bcf8d8de-a26c-5d0d-a3ea-17a00ce9196e", 00:11:37.473 "is_configured": true, 00:11:37.473 "data_offset": 0, 00:11:37.473 "data_size": 65536 00:11:37.473 } 00:11:37.473 ] 00:11:37.473 }' 00:11:37.473 03:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:37.473 03:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:37.473 03:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:37.473 03:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:37.473 03:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:38.043 [2024-11-18 03:11:41.548248] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:38.043 [2024-11-18 03:11:41.548358] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:38.043 [2024-11-18 03:11:41.548404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.304 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:38.304 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:38.304 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:38.304 03:11:41 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:38.304 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:38.304 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:38.304 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.304 03:11:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.304 03:11:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.304 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.304 03:11:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.304 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.304 "name": "raid_bdev1", 00:11:38.304 "uuid": "68ab8188-bc3f-4922-9aa0-2c80c3f248a4", 00:11:38.304 "strip_size_kb": 0, 00:11:38.304 "state": "online", 00:11:38.304 "raid_level": "raid1", 00:11:38.304 "superblock": false, 00:11:38.304 "num_base_bdevs": 2, 00:11:38.304 "num_base_bdevs_discovered": 2, 00:11:38.304 "num_base_bdevs_operational": 2, 00:11:38.304 "base_bdevs_list": [ 00:11:38.304 { 00:11:38.304 "name": "spare", 00:11:38.304 "uuid": "f009a5b2-2629-5b75-8adc-194a0ecf0003", 00:11:38.304 "is_configured": true, 00:11:38.304 "data_offset": 0, 00:11:38.304 "data_size": 65536 00:11:38.304 }, 00:11:38.304 { 00:11:38.304 "name": "BaseBdev2", 00:11:38.304 "uuid": "bcf8d8de-a26c-5d0d-a3ea-17a00ce9196e", 00:11:38.304 "is_configured": true, 00:11:38.304 "data_offset": 0, 00:11:38.304 "data_size": 65536 00:11:38.304 } 00:11:38.304 ] 00:11:38.304 }' 00:11:38.304 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.304 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:38.304 03:11:41 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.564 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:38.564 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:38.564 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:38.564 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:38.564 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:38.564 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:38.564 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:38.564 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.564 03:11:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.564 03:11:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.564 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.564 03:11:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.564 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.564 "name": "raid_bdev1", 00:11:38.564 "uuid": "68ab8188-bc3f-4922-9aa0-2c80c3f248a4", 00:11:38.564 "strip_size_kb": 0, 00:11:38.564 "state": "online", 00:11:38.564 "raid_level": "raid1", 00:11:38.564 "superblock": false, 00:11:38.564 "num_base_bdevs": 2, 00:11:38.564 "num_base_bdevs_discovered": 2, 00:11:38.564 "num_base_bdevs_operational": 2, 00:11:38.564 "base_bdevs_list": [ 00:11:38.564 { 00:11:38.564 "name": "spare", 00:11:38.564 "uuid": "f009a5b2-2629-5b75-8adc-194a0ecf0003", 00:11:38.564 "is_configured": true, 
00:11:38.564 "data_offset": 0, 00:11:38.564 "data_size": 65536 00:11:38.564 }, 00:11:38.564 { 00:11:38.564 "name": "BaseBdev2", 00:11:38.564 "uuid": "bcf8d8de-a26c-5d0d-a3ea-17a00ce9196e", 00:11:38.564 "is_configured": true, 00:11:38.564 "data_offset": 0, 00:11:38.564 "data_size": 65536 00:11:38.564 } 00:11:38.564 ] 00:11:38.564 }' 00:11:38.564 03:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.564 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.564 "name": "raid_bdev1", 00:11:38.564 "uuid": "68ab8188-bc3f-4922-9aa0-2c80c3f248a4", 00:11:38.564 "strip_size_kb": 0, 00:11:38.564 "state": "online", 00:11:38.564 "raid_level": "raid1", 00:11:38.564 "superblock": false, 00:11:38.564 "num_base_bdevs": 2, 00:11:38.565 "num_base_bdevs_discovered": 2, 00:11:38.565 "num_base_bdevs_operational": 2, 00:11:38.565 "base_bdevs_list": [ 00:11:38.565 { 00:11:38.565 "name": "spare", 00:11:38.565 "uuid": "f009a5b2-2629-5b75-8adc-194a0ecf0003", 00:11:38.565 "is_configured": true, 00:11:38.565 "data_offset": 0, 00:11:38.565 "data_size": 65536 00:11:38.565 }, 00:11:38.565 { 00:11:38.565 "name": "BaseBdev2", 00:11:38.565 "uuid": "bcf8d8de-a26c-5d0d-a3ea-17a00ce9196e", 00:11:38.565 "is_configured": true, 00:11:38.565 "data_offset": 0, 00:11:38.565 "data_size": 65536 00:11:38.565 } 00:11:38.565 ] 00:11:38.565 }' 00:11:38.565 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.565 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.135 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:39.135 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.135 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.135 [2024-11-18 03:11:42.491174] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:39.135 [2024-11-18 03:11:42.491212] 
bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:39.135 [2024-11-18 03:11:42.491294] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.135 [2024-11-18 03:11:42.491375] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:39.135 [2024-11-18 03:11:42.491396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:39.135 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.135 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.135 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.135 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.135 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:39.135 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.135 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:39.135 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:39.135 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:39.135 03:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:39.135 03:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:39.135 03:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:39.135 03:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:39.136 03:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:11:39.136 03:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:39.136 03:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:39.136 03:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:39.136 03:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:39.136 03:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:39.396 /dev/nbd0 00:11:39.396 03:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:39.396 03:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:39.396 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:39.396 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:39.396 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:39.397 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:39.397 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:39.397 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:39.397 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:39.397 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:39.397 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:39.397 1+0 records in 00:11:39.397 1+0 records out 00:11:39.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350519 s, 11.7 MB/s 00:11:39.397 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:39.397 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:39.397 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:39.397 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:39.397 03:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:39.397 03:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:39.397 03:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:39.397 03:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:39.657 /dev/nbd1 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:39.657 1+0 records in 00:11:39.657 1+0 records out 00:11:39.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047651 s, 8.6 MB/s 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:39.657 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:39.918 03:11:43 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:39.918 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:39.918 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:39.918 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:39.918 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:39.918 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:39.918 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:39.918 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:39.918 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:39.918 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:40.178 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:40.178 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:40.178 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:40.178 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:40.178 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:40.178 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:40.178 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:40.178 03:11:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:40.178 03:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:40.178 03:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 86172 00:11:40.178 03:11:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 86172 ']' 00:11:40.178 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 86172 00:11:40.178 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:11:40.178 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:40.178 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86172 00:11:40.178 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:40.178 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:40.178 killing process with pid 86172 00:11:40.178 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86172' 00:11:40.178 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 86172 00:11:40.178 Received shutdown signal, test time was about 60.000000 seconds 00:11:40.178 00:11:40.178 Latency(us) 00:11:40.178 [2024-11-18T03:11:43.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.178 [2024-11-18T03:11:43.755Z] =================================================================================================================== 00:11:40.178 [2024-11-18T03:11:43.756Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:40.179 [2024-11-18 03:11:43.614596] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:40.179 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 86172 00:11:40.179 [2024-11-18 03:11:43.646264] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:40.439 03:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:40.439 00:11:40.439 real 0m13.682s 00:11:40.439 user 0m15.748s 00:11:40.439 sys 0m3.037s 00:11:40.439 03:11:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.439 03:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.439 ************************************ 00:11:40.439 END TEST raid_rebuild_test 00:11:40.439 ************************************ 00:11:40.439 03:11:43 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:40.439 03:11:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:40.439 03:11:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.439 03:11:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:40.439 ************************************ 00:11:40.439 START TEST raid_rebuild_test_sb 00:11:40.439 ************************************ 00:11:40.439 03:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:11:40.439 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:40.439 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:40.439 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:40.439 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:40.439 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:40.439 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:40.439 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:40.439 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:40.439 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:40.439 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:11:40.439 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:40.439 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86579 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86579 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86579 ']' 00:11:40.440 03:11:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:40.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:40.440 03:11:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.700 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:40.700 Zero copy mechanism will not be used. 00:11:40.700 [2024-11-18 03:11:44.051063] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:40.700 [2024-11-18 03:11:44.051197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86579 ] 00:11:40.700 [2024-11-18 03:11:44.213130] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.700 [2024-11-18 03:11:44.264603] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.961 [2024-11-18 03:11:44.307359] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.961 [2024-11-18 03:11:44.307407] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.532 03:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:41.532 03:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:41.532 03:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for 
bdev in "${base_bdevs[@]}" 00:11:41.532 03:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:41.532 03:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.532 03:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.532 BaseBdev1_malloc 00:11:41.532 03:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.532 03:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:41.532 03:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.532 03:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.532 [2024-11-18 03:11:44.942089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:41.532 [2024-11-18 03:11:44.942151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.532 [2024-11-18 03:11:44.942174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:41.532 [2024-11-18 03:11:44.942187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.532 [2024-11-18 03:11:44.944445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.532 [2024-11-18 03:11:44.944483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:41.532 BaseBdev1 00:11:41.532 03:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.532 03:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:41.532 03:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:41.532 03:11:44 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.532 03:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.533 BaseBdev2_malloc 00:11:41.533 03:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.533 03:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:41.533 03:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.533 03:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.533 [2024-11-18 03:11:44.981529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:41.533 [2024-11-18 03:11:44.981596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.533 [2024-11-18 03:11:44.981622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:41.533 [2024-11-18 03:11:44.981634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.533 [2024-11-18 03:11:44.984479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.533 [2024-11-18 03:11:44.984525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:41.533 BaseBdev2 00:11:41.533 03:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.533 03:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:41.533 03:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.533 03:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.533 spare_malloc 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.533 spare_delay 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.533 [2024-11-18 03:11:45.022413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:41.533 [2024-11-18 03:11:45.022473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.533 [2024-11-18 03:11:45.022501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:41.533 [2024-11-18 03:11:45.022510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.533 [2024-11-18 03:11:45.024802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.533 [2024-11-18 03:11:45.024836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:41.533 spare 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.533 03:11:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.533 [2024-11-18 03:11:45.034440] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.533 [2024-11-18 03:11:45.036328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.533 [2024-11-18 03:11:45.036490] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:41.533 [2024-11-18 03:11:45.036510] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:41.533 [2024-11-18 03:11:45.036770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:41.533 [2024-11-18 03:11:45.036932] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:41.533 [2024-11-18 03:11:45.036948] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:41.533 [2024-11-18 03:11:45.037091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.533 "name": "raid_bdev1", 00:11:41.533 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69", 00:11:41.533 "strip_size_kb": 0, 00:11:41.533 "state": "online", 00:11:41.533 "raid_level": "raid1", 00:11:41.533 "superblock": true, 00:11:41.533 "num_base_bdevs": 2, 00:11:41.533 "num_base_bdevs_discovered": 2, 00:11:41.533 "num_base_bdevs_operational": 2, 00:11:41.533 "base_bdevs_list": [ 00:11:41.533 { 00:11:41.533 "name": "BaseBdev1", 00:11:41.533 "uuid": "014404c0-72f4-5d22-9c32-c554d82c90c3", 00:11:41.533 "is_configured": true, 00:11:41.533 "data_offset": 2048, 00:11:41.533 "data_size": 63488 00:11:41.533 }, 00:11:41.533 { 00:11:41.533 "name": "BaseBdev2", 00:11:41.533 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122", 00:11:41.533 "is_configured": true, 00:11:41.533 "data_offset": 2048, 00:11:41.533 "data_size": 63488 00:11:41.533 } 00:11:41.533 ] 00:11:41.533 }' 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.533 03:11:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.104 [2024-11-18 03:11:45.477941] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:42.104 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:42.364 [2024-11-18 03:11:45.761222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:42.364 /dev/nbd0 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:42.364 1+0 records in 00:11:42.364 1+0 records out 00:11:42.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366428 s, 11.2 MB/s 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:42.364 03:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:46.565 63488+0 records in 00:11:46.565 63488+0 records out 00:11:46.565 32505856 bytes (33 MB, 31 MiB) copied, 3.93092 s, 8.3 MB/s 00:11:46.565 03:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:46.565 03:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:46.565 03:11:49 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:46.565 03:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:46.565 03:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:46.565 03:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:46.565 03:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:46.565 [2024-11-18 03:11:49.961435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.565 03:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:46.565 03:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:46.565 03:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:46.565 03:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.565 03:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.566 03:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:46.566 03:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:46.566 03:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.566 03:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:46.566 03:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.566 03:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.566 [2024-11-18 03:11:49.997477] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.566 "name": "raid_bdev1", 00:11:46.566 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69", 00:11:46.566 "strip_size_kb": 0, 00:11:46.566 "state": "online", 00:11:46.566 "raid_level": "raid1", 00:11:46.566 "superblock": true, 
00:11:46.566 "num_base_bdevs": 2, 00:11:46.566 "num_base_bdevs_discovered": 1, 00:11:46.566 "num_base_bdevs_operational": 1, 00:11:46.566 "base_bdevs_list": [ 00:11:46.566 { 00:11:46.566 "name": null, 00:11:46.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.566 "is_configured": false, 00:11:46.566 "data_offset": 0, 00:11:46.566 "data_size": 63488 00:11:46.566 }, 00:11:46.566 { 00:11:46.566 "name": "BaseBdev2", 00:11:46.566 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122", 00:11:46.566 "is_configured": true, 00:11:46.566 "data_offset": 2048, 00:11:46.566 "data_size": 63488 00:11:46.566 } 00:11:46.566 ] 00:11:46.566 }' 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.566 03:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.826 03:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:46.826 03:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.826 03:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.826 [2024-11-18 03:11:50.376859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:46.826 [2024-11-18 03:11:50.381185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:11:46.826 03:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.826 [2024-11-18 03:11:50.383295] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:46.826 03:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.210 "name": "raid_bdev1", 00:11:48.210 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69", 00:11:48.210 "strip_size_kb": 0, 00:11:48.210 "state": "online", 00:11:48.210 "raid_level": "raid1", 00:11:48.210 "superblock": true, 00:11:48.210 "num_base_bdevs": 2, 00:11:48.210 "num_base_bdevs_discovered": 2, 00:11:48.210 "num_base_bdevs_operational": 2, 00:11:48.210 "process": { 00:11:48.210 "type": "rebuild", 00:11:48.210 "target": "spare", 00:11:48.210 "progress": { 00:11:48.210 "blocks": 20480, 00:11:48.210 "percent": 32 00:11:48.210 } 00:11:48.210 }, 00:11:48.210 "base_bdevs_list": [ 00:11:48.210 { 00:11:48.210 "name": "spare", 00:11:48.210 "uuid": "b514e03e-4ea5-5c33-af79-298cec9a37ea", 00:11:48.210 "is_configured": true, 00:11:48.210 "data_offset": 2048, 00:11:48.210 "data_size": 63488 00:11:48.210 }, 00:11:48.210 { 00:11:48.210 "name": "BaseBdev2", 00:11:48.210 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122", 00:11:48.210 "is_configured": true, 00:11:48.210 "data_offset": 2048, 00:11:48.210 "data_size": 63488 
00:11:48.210 } 00:11:48.210 ] 00:11:48.210 }' 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.210 [2024-11-18 03:11:51.556177] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:48.210 [2024-11-18 03:11:51.588103] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:48.210 [2024-11-18 03:11:51.588161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.210 [2024-11-18 03:11:51.588179] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:48.210 [2024-11-18 03:11:51.588185] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.210 "name": "raid_bdev1", 00:11:48.210 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69", 00:11:48.210 "strip_size_kb": 0, 00:11:48.210 "state": "online", 00:11:48.210 "raid_level": "raid1", 00:11:48.210 "superblock": true, 00:11:48.210 "num_base_bdevs": 2, 00:11:48.210 "num_base_bdevs_discovered": 1, 00:11:48.210 "num_base_bdevs_operational": 1, 00:11:48.210 "base_bdevs_list": [ 00:11:48.210 { 00:11:48.210 "name": null, 00:11:48.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.210 "is_configured": false, 00:11:48.210 "data_offset": 0, 00:11:48.210 "data_size": 63488 00:11:48.210 }, 00:11:48.210 { 00:11:48.210 "name": "BaseBdev2", 00:11:48.210 "uuid": 
"fe6142a1-d9d8-5dec-98e1-8b038e53f122", 00:11:48.210 "is_configured": true, 00:11:48.210 "data_offset": 2048, 00:11:48.210 "data_size": 63488 00:11:48.210 } 00:11:48.210 ] 00:11:48.210 }' 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.210 03:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.471 03:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:48.471 03:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.471 03:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:48.471 03:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:48.471 03:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.471 03:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.471 03:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.471 03:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.471 03:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.731 03:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.731 03:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.731 "name": "raid_bdev1", 00:11:48.731 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69", 00:11:48.731 "strip_size_kb": 0, 00:11:48.731 "state": "online", 00:11:48.731 "raid_level": "raid1", 00:11:48.731 "superblock": true, 00:11:48.731 "num_base_bdevs": 2, 00:11:48.731 "num_base_bdevs_discovered": 1, 00:11:48.731 "num_base_bdevs_operational": 1, 00:11:48.731 "base_bdevs_list": [ 00:11:48.731 { 
00:11:48.731 "name": null, 00:11:48.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.731 "is_configured": false, 00:11:48.731 "data_offset": 0, 00:11:48.731 "data_size": 63488 00:11:48.731 }, 00:11:48.731 { 00:11:48.731 "name": "BaseBdev2", 00:11:48.731 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122", 00:11:48.731 "is_configured": true, 00:11:48.731 "data_offset": 2048, 00:11:48.731 "data_size": 63488 00:11:48.731 } 00:11:48.731 ] 00:11:48.731 }' 00:11:48.731 03:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:48.731 03:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:48.731 03:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:48.731 03:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:48.731 03:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:48.731 03:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.731 03:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.731 [2024-11-18 03:11:52.175942] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:48.731 [2024-11-18 03:11:52.180260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:11:48.731 03:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.731 03:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:48.731 [2024-11-18 03:11:52.182208] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:49.672 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:49.672 03:11:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:49.672 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:49.672 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:49.672 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:49.672 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.672 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.672 03:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.672 03:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.672 03:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.672 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:49.672 "name": "raid_bdev1", 00:11:49.672 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69", 00:11:49.672 "strip_size_kb": 0, 00:11:49.672 "state": "online", 00:11:49.672 "raid_level": "raid1", 00:11:49.672 "superblock": true, 00:11:49.672 "num_base_bdevs": 2, 00:11:49.672 "num_base_bdevs_discovered": 2, 00:11:49.672 "num_base_bdevs_operational": 2, 00:11:49.672 "process": { 00:11:49.672 "type": "rebuild", 00:11:49.672 "target": "spare", 00:11:49.672 "progress": { 00:11:49.672 "blocks": 20480, 00:11:49.672 "percent": 32 00:11:49.672 } 00:11:49.672 }, 00:11:49.672 "base_bdevs_list": [ 00:11:49.672 { 00:11:49.672 "name": "spare", 00:11:49.672 "uuid": "b514e03e-4ea5-5c33-af79-298cec9a37ea", 00:11:49.672 "is_configured": true, 00:11:49.672 "data_offset": 2048, 00:11:49.672 "data_size": 63488 00:11:49.672 }, 00:11:49.672 { 00:11:49.672 "name": "BaseBdev2", 00:11:49.672 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122", 00:11:49.672 
"is_configured": true, 00:11:49.672 "data_offset": 2048, 00:11:49.672 "data_size": 63488 00:11:49.672 } 00:11:49.672 ] 00:11:49.672 }' 00:11:49.672 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:49.933 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=307 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:49.933 "name": "raid_bdev1", 00:11:49.933 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69", 00:11:49.933 "strip_size_kb": 0, 00:11:49.933 "state": "online", 00:11:49.933 "raid_level": "raid1", 00:11:49.933 "superblock": true, 00:11:49.933 "num_base_bdevs": 2, 00:11:49.933 "num_base_bdevs_discovered": 2, 00:11:49.933 "num_base_bdevs_operational": 2, 00:11:49.933 "process": { 00:11:49.933 "type": "rebuild", 00:11:49.933 "target": "spare", 00:11:49.933 "progress": { 00:11:49.933 "blocks": 22528, 00:11:49.933 "percent": 35 00:11:49.933 } 00:11:49.933 }, 00:11:49.933 "base_bdevs_list": [ 00:11:49.933 { 00:11:49.933 "name": "spare", 00:11:49.933 "uuid": "b514e03e-4ea5-5c33-af79-298cec9a37ea", 00:11:49.933 "is_configured": true, 00:11:49.933 "data_offset": 2048, 00:11:49.933 "data_size": 63488 00:11:49.933 }, 00:11:49.933 { 00:11:49.933 "name": "BaseBdev2", 00:11:49.933 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122", 00:11:49.933 "is_configured": true, 00:11:49.933 "data_offset": 2048, 00:11:49.933 "data_size": 63488 00:11:49.933 } 00:11:49.933 ] 00:11:49.933 }' 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:49.933 03:11:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:49.933 03:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:51.315 03:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:51.315 03:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:51.315 03:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.315 03:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:51.315 03:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:51.315 03:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:51.315 03:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.315 03:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.315 03:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.315 03:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.315 03:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.315 03:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:51.315 "name": "raid_bdev1", 00:11:51.316 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69", 00:11:51.316 "strip_size_kb": 0, 00:11:51.316 "state": "online", 00:11:51.316 "raid_level": "raid1", 00:11:51.316 "superblock": true, 00:11:51.316 "num_base_bdevs": 2, 00:11:51.316 "num_base_bdevs_discovered": 2, 00:11:51.316 "num_base_bdevs_operational": 2, 00:11:51.316 "process": { 
00:11:51.316 "type": "rebuild", 00:11:51.316 "target": "spare", 00:11:51.316 "progress": { 00:11:51.316 "blocks": 45056, 00:11:51.316 "percent": 70 00:11:51.316 } 00:11:51.316 }, 00:11:51.316 "base_bdevs_list": [ 00:11:51.316 { 00:11:51.316 "name": "spare", 00:11:51.316 "uuid": "b514e03e-4ea5-5c33-af79-298cec9a37ea", 00:11:51.316 "is_configured": true, 00:11:51.316 "data_offset": 2048, 00:11:51.316 "data_size": 63488 00:11:51.316 }, 00:11:51.316 { 00:11:51.316 "name": "BaseBdev2", 00:11:51.316 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122", 00:11:51.316 "is_configured": true, 00:11:51.316 "data_offset": 2048, 00:11:51.316 "data_size": 63488 00:11:51.316 } 00:11:51.316 ] 00:11:51.316 }' 00:11:51.316 03:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:51.316 03:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:51.316 03:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:51.316 03:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:51.316 03:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:51.887 [2024-11-18 03:11:55.293498] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:51.887 [2024-11-18 03:11:55.293608] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:51.887 [2024-11-18 03:11:55.293725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.147 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:52.147 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:52.147 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.147 
03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:52.147 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:52.147 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.147 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.147 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.147 03:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.147 03:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.147 03:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.147 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.147 "name": "raid_bdev1", 00:11:52.147 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69", 00:11:52.147 "strip_size_kb": 0, 00:11:52.147 "state": "online", 00:11:52.147 "raid_level": "raid1", 00:11:52.147 "superblock": true, 00:11:52.147 "num_base_bdevs": 2, 00:11:52.147 "num_base_bdevs_discovered": 2, 00:11:52.147 "num_base_bdevs_operational": 2, 00:11:52.147 "base_bdevs_list": [ 00:11:52.147 { 00:11:52.147 "name": "spare", 00:11:52.147 "uuid": "b514e03e-4ea5-5c33-af79-298cec9a37ea", 00:11:52.147 "is_configured": true, 00:11:52.147 "data_offset": 2048, 00:11:52.147 "data_size": 63488 00:11:52.147 }, 00:11:52.147 { 00:11:52.147 "name": "BaseBdev2", 00:11:52.147 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122", 00:11:52.147 "is_configured": true, 00:11:52.147 "data_offset": 2048, 00:11:52.147 "data_size": 63488 00:11:52.147 } 00:11:52.147 ] 00:11:52.147 }' 00:11:52.147 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.147 03:11:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.408 "name": "raid_bdev1", 00:11:52.408 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69", 00:11:52.408 "strip_size_kb": 0, 00:11:52.408 "state": "online", 00:11:52.408 "raid_level": "raid1", 00:11:52.408 "superblock": true, 00:11:52.408 "num_base_bdevs": 2, 00:11:52.408 "num_base_bdevs_discovered": 2, 00:11:52.408 "num_base_bdevs_operational": 2, 00:11:52.408 "base_bdevs_list": [ 00:11:52.408 { 00:11:52.408 
"name": "spare", 00:11:52.408 "uuid": "b514e03e-4ea5-5c33-af79-298cec9a37ea", 00:11:52.408 "is_configured": true, 00:11:52.408 "data_offset": 2048, 00:11:52.408 "data_size": 63488 00:11:52.408 }, 00:11:52.408 { 00:11:52.408 "name": "BaseBdev2", 00:11:52.408 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122", 00:11:52.408 "is_configured": true, 00:11:52.408 "data_offset": 2048, 00:11:52.408 "data_size": 63488 00:11:52.408 } 00:11:52.408 ] 00:11:52.408 }' 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.408 "name": "raid_bdev1", 00:11:52.408 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69", 00:11:52.408 "strip_size_kb": 0, 00:11:52.408 "state": "online", 00:11:52.408 "raid_level": "raid1", 00:11:52.408 "superblock": true, 00:11:52.408 "num_base_bdevs": 2, 00:11:52.408 "num_base_bdevs_discovered": 2, 00:11:52.408 "num_base_bdevs_operational": 2, 00:11:52.408 "base_bdevs_list": [ 00:11:52.408 { 00:11:52.408 "name": "spare", 00:11:52.408 "uuid": "b514e03e-4ea5-5c33-af79-298cec9a37ea", 00:11:52.408 "is_configured": true, 00:11:52.408 "data_offset": 2048, 00:11:52.408 "data_size": 63488 00:11:52.408 }, 00:11:52.408 { 00:11:52.408 "name": "BaseBdev2", 00:11:52.408 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122", 00:11:52.408 "is_configured": true, 00:11:52.408 "data_offset": 2048, 00:11:52.408 "data_size": 63488 00:11:52.408 } 00:11:52.408 ] 00:11:52.408 }' 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.408 03:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.978 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:52.979 [2024-11-18 03:11:56.328627] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:52.979 [2024-11-18 03:11:56.328663] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.979 [2024-11-18 03:11:56.328758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:52.979 [2024-11-18 03:11:56.328831] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.979 [2024-11-18 03:11:56.328848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:52.979 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:53.256 /dev/nbd0 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.256 1+0 records in 00:11:53.256 1+0 records out 00:11:53.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217974 s, 18.8 MB/s 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:53.256 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:53.530 /dev/nbd1 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:53.530 03:11:56 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.530 1+0 records in 00:11:53.530 1+0 records out 00:11:53.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435766 s, 9.4 MB/s 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:53.530 
03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:53.530 03:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:53.791 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:53.791 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:53.791 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:53.791 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:53.791 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:53.791 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:53.791 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:53.791 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:53.791 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:53.791 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.051 [2024-11-18 03:11:57.461246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:54.051 [2024-11-18 03:11:57.461307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.051 [2024-11-18 03:11:57.461326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:54.051 [2024-11-18 03:11:57.461339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.051 [2024-11-18 03:11:57.463654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.051 [2024-11-18 03:11:57.463696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:54.051 [2024-11-18 03:11:57.463783] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:54.051 [2024-11-18 
03:11:57.463833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:54.051 [2024-11-18 03:11:57.463973] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.051 spare 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.051 [2024-11-18 03:11:57.563916] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:11:54.051 [2024-11-18 03:11:57.563965] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:54.051 [2024-11-18 03:11:57.564330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1940 00:11:54.051 [2024-11-18 03:11:57.564493] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:11:54.051 [2024-11-18 03:11:57.564511] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:11:54.051 [2024-11-18 03:11:57.564652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.051 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.051 "name": "raid_bdev1", 00:11:54.051 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69", 00:11:54.051 "strip_size_kb": 0, 00:11:54.051 "state": "online", 00:11:54.051 "raid_level": "raid1", 00:11:54.051 "superblock": true, 00:11:54.051 "num_base_bdevs": 2, 00:11:54.051 "num_base_bdevs_discovered": 2, 00:11:54.051 "num_base_bdevs_operational": 2, 00:11:54.051 "base_bdevs_list": [ 00:11:54.051 { 00:11:54.051 "name": "spare", 00:11:54.051 "uuid": "b514e03e-4ea5-5c33-af79-298cec9a37ea", 00:11:54.051 "is_configured": true, 00:11:54.051 "data_offset": 2048, 00:11:54.051 "data_size": 63488 00:11:54.051 }, 00:11:54.051 { 00:11:54.051 "name": "BaseBdev2", 00:11:54.052 "uuid": 
"fe6142a1-d9d8-5dec-98e1-8b038e53f122", 00:11:54.052 "is_configured": true, 00:11:54.052 "data_offset": 2048, 00:11:54.052 "data_size": 63488 00:11:54.052 } 00:11:54.052 ] 00:11:54.052 }' 00:11:54.052 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.052 03:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.622 03:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:54.622 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.622 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:54.622 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:54.622 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.622 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.622 03:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.622 03:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.622 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.622 03:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.622 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.622 "name": "raid_bdev1", 00:11:54.622 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69", 00:11:54.622 "strip_size_kb": 0, 00:11:54.622 "state": "online", 00:11:54.622 "raid_level": "raid1", 00:11:54.622 "superblock": true, 00:11:54.622 "num_base_bdevs": 2, 00:11:54.622 "num_base_bdevs_discovered": 2, 00:11:54.622 "num_base_bdevs_operational": 2, 00:11:54.622 "base_bdevs_list": [ 00:11:54.622 { 
00:11:54.622 "name": "spare",
00:11:54.622 "uuid": "b514e03e-4ea5-5c33-af79-298cec9a37ea",
00:11:54.622 "is_configured": true,
00:11:54.622 "data_offset": 2048,
00:11:54.622 "data_size": 63488
00:11:54.622 },
00:11:54.622 {
00:11:54.622 "name": "BaseBdev2",
00:11:54.623 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122",
00:11:54.623 "is_configured": true,
00:11:54.623 "data_offset": 2048,
00:11:54.623 "data_size": 63488
00:11:54.623 }
00:11:54.623 ]
00:11:54.623 }'
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:54.623 [2024-11-18 03:11:58.180105] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.623 03:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:54.884 03:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.884 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:54.884 "name": "raid_bdev1",
00:11:54.884 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69",
00:11:54.884 "strip_size_kb": 0,
00:11:54.884 "state": "online",
00:11:54.884 "raid_level": "raid1",
00:11:54.884 "superblock": true,
00:11:54.884 "num_base_bdevs": 2,
00:11:54.884 "num_base_bdevs_discovered": 1,
00:11:54.884 "num_base_bdevs_operational": 1,
00:11:54.884 "base_bdevs_list": [
00:11:54.884 {
00:11:54.884 "name": null,
00:11:54.884 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:54.885 "is_configured": false,
00:11:54.885 "data_offset": 0,
00:11:54.885 "data_size": 63488
00:11:54.885 },
00:11:54.885 {
00:11:54.885 "name": "BaseBdev2",
00:11:54.885 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122",
00:11:54.885 "is_configured": true,
00:11:54.885 "data_offset": 2048,
00:11:54.885 "data_size": 63488
00:11:54.885 }
00:11:54.885 ]
00:11:54.885 }'
00:11:54.885 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:54.885 03:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:55.145 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:11:55.145 03:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:55.145 03:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:55.145 [2024-11-18 03:11:58.643314] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:55.145 [2024-11-18 03:11:58.643525] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:11:55.145 [2024-11-18 03:11:58.643550] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:11:55.145 [2024-11-18 03:11:58.643604] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:55.145 [2024-11-18 03:11:58.647689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1a10
00:11:55.145 03:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:55.145 03:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:11:55.145 [2024-11-18 03:11:58.649734] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:11:56.087 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:56.087 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:56.087 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:56.087 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:56.087 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:56.087 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:56.088 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:56.088 03:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:56.088 03:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:56.349 03:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:56.349 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:56.349 "name": "raid_bdev1",
00:11:56.349 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69",
00:11:56.349 "strip_size_kb": 0,
00:11:56.349 "state": "online",
00:11:56.349 "raid_level": "raid1",
00:11:56.349 "superblock": true,
00:11:56.349 "num_base_bdevs": 2,
00:11:56.349 "num_base_bdevs_discovered": 2,
00:11:56.349 "num_base_bdevs_operational": 2,
00:11:56.349 "process": {
00:11:56.349 "type": "rebuild",
00:11:56.349 "target": "spare",
00:11:56.349 "progress": {
00:11:56.349 "blocks": 20480,
00:11:56.349 "percent": 32
00:11:56.349 }
00:11:56.349 },
00:11:56.349 "base_bdevs_list": [
00:11:56.349 {
00:11:56.349 "name": "spare",
00:11:56.349 "uuid": "b514e03e-4ea5-5c33-af79-298cec9a37ea",
00:11:56.349 "is_configured": true,
00:11:56.349 "data_offset": 2048,
00:11:56.349 "data_size": 63488
00:11:56.349 },
00:11:56.349 {
00:11:56.349 "name": "BaseBdev2",
00:11:56.349 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122",
00:11:56.349 "is_configured": true,
00:11:56.349 "data_offset": 2048,
00:11:56.349 "data_size": 63488
00:11:56.349 }
00:11:56.349 ]
00:11:56.349 }'
00:11:56.349 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:56.349 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:56.349 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:56.349 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:56.350 [2024-11-18 03:11:59.798999] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:56.350 [2024-11-18 03:11:59.854817] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:11:56.350 [2024-11-18 03:11:59.854914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:56.350 [2024-11-18 03:11:59.854932] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:56.350 [2024-11-18 03:11:59.854939] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:56.350 "name": "raid_bdev1",
00:11:56.350 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69",
00:11:56.350 "strip_size_kb": 0,
00:11:56.350 "state": "online",
00:11:56.350 "raid_level": "raid1",
00:11:56.350 "superblock": true,
00:11:56.350 "num_base_bdevs": 2,
00:11:56.350 "num_base_bdevs_discovered": 1,
00:11:56.350 "num_base_bdevs_operational": 1,
00:11:56.350 "base_bdevs_list": [
00:11:56.350 {
00:11:56.350 "name": null,
00:11:56.350 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:56.350 "is_configured": false,
00:11:56.350 "data_offset": 0,
00:11:56.350 "data_size": 63488
00:11:56.350 },
00:11:56.350 {
00:11:56.350 "name": "BaseBdev2",
00:11:56.350 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122",
00:11:56.350 "is_configured": true,
00:11:56.350 "data_offset": 2048,
00:11:56.350 "data_size": 63488
00:11:56.350 }
00:11:56.350 ]
00:11:56.350 }'
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:56.350 03:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:56.920 03:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:11:56.920 03:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:56.920 03:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:56.921 [2024-11-18 03:12:00.330848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:11:56.921 [2024-11-18 03:12:00.330929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:56.921 [2024-11-18 03:12:00.330971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:11:56.921 [2024-11-18 03:12:00.330985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:56.921 [2024-11-18 03:12:00.331477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:56.921 [2024-11-18 03:12:00.331510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:11:56.921 [2024-11-18 03:12:00.331609] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:11:56.921 [2024-11-18 03:12:00.331628] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:11:56.921 [2024-11-18 03:12:00.331646] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:11:56.921 [2024-11-18 03:12:00.331678] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:56.921 [2024-11-18 03:12:00.335876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0
00:11:56.921 spare
00:11:56.921 03:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:56.921 03:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
00:11:56.921 [2024-11-18 03:12:00.337976] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:11:57.860 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:57.860 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:57.860 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:57.860 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:57.860 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:57.860 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:57.860 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:57.860 03:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.860 03:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:57.860 03:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.860 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:57.860 "name": "raid_bdev1",
00:11:57.860 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69",
00:11:57.860 "strip_size_kb": 0,
00:11:57.860 "state": "online",
00:11:57.860 "raid_level": "raid1",
00:11:57.860 "superblock": true,
00:11:57.860 "num_base_bdevs": 2,
00:11:57.860 "num_base_bdevs_discovered": 2,
00:11:57.860 "num_base_bdevs_operational": 2,
00:11:57.860 "process": {
00:11:57.860 "type": "rebuild",
00:11:57.860 "target": "spare",
00:11:57.860 "progress": {
00:11:57.860 "blocks": 20480,
00:11:57.860 "percent": 32
00:11:57.860 }
00:11:57.860 },
00:11:57.860 "base_bdevs_list": [
00:11:57.860 {
00:11:57.860 "name": "spare",
00:11:57.860 "uuid": "b514e03e-4ea5-5c33-af79-298cec9a37ea",
00:11:57.860 "is_configured": true,
00:11:57.860 "data_offset": 2048,
00:11:57.860 "data_size": 63488
00:11:57.860 },
00:11:57.860 {
00:11:57.860 "name": "BaseBdev2",
00:11:57.860 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122",
00:11:57.860 "is_configured": true,
00:11:57.860 "data_offset": 2048,
00:11:57.860 "data_size": 63488
00:11:57.860 }
00:11:57.860 ]
00:11:57.860 }'
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:58.121 [2024-11-18 03:12:01.494895] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:58.121 [2024-11-18 03:12:01.542715] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:11:58.121 [2024-11-18 03:12:01.542816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:58.121 [2024-11-18 03:12:01.542833] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:58.121 [2024-11-18 03:12:01.542844] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:58.121 "name": "raid_bdev1",
00:11:58.121 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69",
00:11:58.121 "strip_size_kb": 0,
00:11:58.121 "state": "online",
00:11:58.121 "raid_level": "raid1",
00:11:58.121 "superblock": true,
00:11:58.121 "num_base_bdevs": 2,
00:11:58.121 "num_base_bdevs_discovered": 1,
00:11:58.121 "num_base_bdevs_operational": 1,
00:11:58.121 "base_bdevs_list": [
00:11:58.121 {
00:11:58.121 "name": null,
00:11:58.121 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:58.121 "is_configured": false,
00:11:58.121 "data_offset": 0,
00:11:58.121 "data_size": 63488
00:11:58.121 },
00:11:58.121 {
00:11:58.121 "name": "BaseBdev2",
00:11:58.121 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122",
00:11:58.121 "is_configured": true,
00:11:58.121 "data_offset": 2048,
00:11:58.121 "data_size": 63488
00:11:58.121 }
00:11:58.121 ]
00:11:58.121 }'
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:58.121 03:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:58.690 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:11:58.690 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:58.690 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:11:58.690 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:11:58.690 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:58.690 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:58.690 03:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:58.690 03:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:58.690 03:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:58.690 03:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:58.690 03:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:58.690 "name": "raid_bdev1",
00:11:58.690 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69",
00:11:58.690 "strip_size_kb": 0,
00:11:58.690 "state": "online",
00:11:58.690 "raid_level": "raid1",
00:11:58.690 "superblock": true,
00:11:58.690 "num_base_bdevs": 2,
00:11:58.690 "num_base_bdevs_discovered": 1,
00:11:58.690 "num_base_bdevs_operational": 1,
00:11:58.690 "base_bdevs_list": [
00:11:58.690 {
00:11:58.690 "name": null,
00:11:58.690 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:58.690 "is_configured": false,
00:11:58.690 "data_offset": 0,
00:11:58.690 "data_size": 63488
00:11:58.690 },
00:11:58.690 {
00:11:58.690 "name": "BaseBdev2",
00:11:58.690 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122",
00:11:58.690 "is_configured": true,
00:11:58.690 "data_offset": 2048,
00:11:58.690 "data_size": 63488
00:11:58.690 }
00:11:58.690 ]
00:11:58.690 }'
00:11:58.690 03:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:58.690 03:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:11:58.690 03:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:58.690 03:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:11:58.690 03:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:11:58.690 03:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:58.690 03:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:58.690 03:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:58.690 03:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:11:58.690 03:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:58.690 03:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:58.690 [2024-11-18 03:12:02.134443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:11:58.690 [2024-11-18 03:12:02.134525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:58.690 [2024-11-18 03:12:02.134547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:11:58.690 [2024-11-18 03:12:02.134558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:58.690 [2024-11-18 03:12:02.134991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:58.690 [2024-11-18 03:12:02.135020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:11:58.691 [2024-11-18 03:12:02.135100] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:11:58.691 [2024-11-18 03:12:02.135119] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:11:58.691 [2024-11-18 03:12:02.135127] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:11:58.691 [2024-11-18 03:12:02.135140] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:11:58.691 BaseBdev1
00:11:58.691 03:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:58.691 03:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1
00:11:59.631 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:11:59.631 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:59.631 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:59.631 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:59.631 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:59.631 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:11:59.631 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:59.631 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:59.631 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:59.631 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:59.631 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:59.631 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:59.631 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:59.631 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:59.631 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:59.631 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:59.631 "name": "raid_bdev1",
00:11:59.631 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69",
00:11:59.631 "strip_size_kb": 0,
00:11:59.631 "state": "online",
00:11:59.631 "raid_level": "raid1",
00:11:59.631 "superblock": true,
00:11:59.631 "num_base_bdevs": 2,
00:11:59.631 "num_base_bdevs_discovered": 1,
00:11:59.631 "num_base_bdevs_operational": 1,
00:11:59.631 "base_bdevs_list": [
00:11:59.631 {
00:11:59.631 "name": null,
00:11:59.631 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:59.631 "is_configured": false,
00:11:59.631 "data_offset": 0,
00:11:59.631 "data_size": 63488
00:11:59.631 },
00:11:59.631 {
00:11:59.631 "name": "BaseBdev2",
00:11:59.631 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122",
00:11:59.631 "is_configured": true,
00:11:59.631 "data_offset": 2048,
00:11:59.631 "data_size": 63488
00:11:59.631 }
00:11:59.631 ]
00:11:59.631 }'
00:11:59.631 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:59.631 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:00.203 "name": "raid_bdev1",
00:12:00.203 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69",
00:12:00.203 "strip_size_kb": 0,
00:12:00.203 "state": "online",
00:12:00.203 "raid_level": "raid1",
00:12:00.203 "superblock": true,
00:12:00.203 "num_base_bdevs": 2,
00:12:00.203 "num_base_bdevs_discovered": 1,
00:12:00.203 "num_base_bdevs_operational": 1,
00:12:00.203 "base_bdevs_list": [
00:12:00.203 {
00:12:00.203 "name": null,
00:12:00.203 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:00.203 "is_configured": false,
00:12:00.203 "data_offset": 0,
00:12:00.203 "data_size": 63488
00:12:00.203 },
00:12:00.203 {
00:12:00.203 "name": "BaseBdev2",
00:12:00.203 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122",
00:12:00.203 "is_configured": true,
00:12:00.203 "data_offset": 2048,
00:12:00.203 "data_size": 63488
00:12:00.203 }
00:12:00.203 ]
00:12:00.203 }'
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:00.203 [2024-11-18 03:12:03.719781] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:00.203 [2024-11-18 03:12:03.719974] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:12:00.203 [2024-11-18 03:12:03.719992] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:12:00.203 request:
00:12:00.203 {
00:12:00.203 "base_bdev": "BaseBdev1",
00:12:00.203 "raid_bdev": "raid_bdev1",
00:12:00.203 "method": "bdev_raid_add_base_bdev",
00:12:00.203 "req_id": 1
00:12:00.203 }
00:12:00.203 Got JSON-RPC error response
00:12:00.203 response:
00:12:00.203 {
00:12:00.203 "code": -22,
00:12:00.203 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:12:00.203 }
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:12:00.203 03:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1
00:12:01.585 03:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:12:01.585 03:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:01.585 03:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:01.585 03:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:01.585 03:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:01.585 03:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:12:01.585 03:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:01.585 03:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:01.585 03:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:01.585 03:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:01.585 03:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:01.585 03:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:01.585 03:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.585 03:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.585 03:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.585 03:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:01.585 "name": "raid_bdev1",
00:12:01.585 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69",
00:12:01.585 "strip_size_kb": 0,
00:12:01.585 "state": "online",
00:12:01.585 "raid_level": "raid1",
00:12:01.585 "superblock": true,
00:12:01.585 "num_base_bdevs": 2,
00:12:01.585 "num_base_bdevs_discovered": 1,
00:12:01.585 "num_base_bdevs_operational": 1,
00:12:01.585 "base_bdevs_list": [
00:12:01.585 {
00:12:01.585 "name": null,
00:12:01.585 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:01.585 "is_configured": false,
00:12:01.585 "data_offset": 0,
00:12:01.585 "data_size": 63488
00:12:01.585 },
00:12:01.585 {
00:12:01.585 "name": "BaseBdev2",
00:12:01.585 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122",
00:12:01.585 "is_configured": true,
00:12:01.586 "data_offset": 2048,
00:12:01.586 "data_size": 63488
00:12:01.586 }
00:12:01.586 ]
00:12:01.586 }'
00:12:01.586 03:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:01.586 03:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:01.845 "name": "raid_bdev1",
00:12:01.845 "uuid": "45eb2d3b-7abf-4036-bf44-9daf13a4ef69",
00:12:01.845 "strip_size_kb": 0,
00:12:01.845 "state": "online",
00:12:01.845 "raid_level": "raid1",
00:12:01.845 "superblock": true,
00:12:01.845 "num_base_bdevs": 2,
00:12:01.845 "num_base_bdevs_discovered": 1,
00:12:01.845 "num_base_bdevs_operational": 1,
00:12:01.845 "base_bdevs_list": [
00:12:01.845 {
00:12:01.845 "name": null,
00:12:01.845 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:01.845 "is_configured": false,
00:12:01.845 "data_offset": 0,
00:12:01.845 "data_size": 63488
00:12:01.845 },
00:12:01.845 {
00:12:01.845 "name": "BaseBdev2",
00:12:01.845 "uuid": "fe6142a1-d9d8-5dec-98e1-8b038e53f122",
00:12:01.845 "is_configured": true,
00:12:01.845 "data_offset": 2048,
00:12:01.845 "data_size": 63488
00:12:01.845 }
00:12:01.845 ]
00:12:01.845 }'
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86579
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86579 ']'
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86579
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86579
killing process with pid 86579
Received shutdown signal, test time was about 60.000000 seconds
00:12:01.845
00:12:01.845 Latency(us)
00:12:01.845 [2024-11-18T03:12:05.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:01.845 [2024-11-18T03:12:05.422Z] ===================================================================================================================
00:12:01.845 [2024-11-18T03:12:05.422Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86579'
00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86579
00:12:01.845 [2024-11-18 03:12:05.350590] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:01.845 [2024-11-18
03:12:05.350726] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:01.845 03:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86579 00:12:01.845 [2024-11-18 03:12:05.350779] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:01.845 [2024-11-18 03:12:05.350790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:01.845 [2024-11-18 03:12:05.383279] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.106 03:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:02.106 00:12:02.106 real 0m21.666s 00:12:02.106 user 0m26.791s 00:12:02.106 sys 0m3.614s 00:12:02.106 03:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:02.106 03:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.106 ************************************ 00:12:02.106 END TEST raid_rebuild_test_sb 00:12:02.106 ************************************ 00:12:02.366 03:12:05 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:02.366 03:12:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:02.366 03:12:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.366 03:12:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.366 ************************************ 00:12:02.366 START TEST raid_rebuild_test_io 00:12:02.366 ************************************ 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:02.366 
03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87289 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87289 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 87289 ']' 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:02.366 03:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.366 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:02.366 Zero copy mechanism will not be used. 00:12:02.366 [2024-11-18 03:12:05.791695] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:02.366 [2024-11-18 03:12:05.791821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87289 ] 00:12:02.366 [2024-11-18 03:12:05.936552] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.626 [2024-11-18 03:12:05.988622] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.626 [2024-11-18 03:12:06.032365] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.626 [2024-11-18 03:12:06.032404] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.205 BaseBdev1_malloc 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.205 [2024-11-18 03:12:06.691498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:03.205 [2024-11-18 03:12:06.691570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.205 [2024-11-18 03:12:06.691604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:03.205 [2024-11-18 03:12:06.691627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.205 [2024-11-18 03:12:06.694093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.205 [2024-11-18 03:12:06.694129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:03.205 BaseBdev1 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.205 BaseBdev2_malloc 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.205 [2024-11-18 03:12:06.728208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:03.205 [2024-11-18 03:12:06.728287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.205 [2024-11-18 03:12:06.728310] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:03.205 [2024-11-18 03:12:06.728319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.205 [2024-11-18 03:12:06.730558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.205 [2024-11-18 03:12:06.730607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:03.205 BaseBdev2 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.205 spare_malloc 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.205 spare_delay 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.205 [2024-11-18 03:12:06.769164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:12:03.205 [2024-11-18 03:12:06.769236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.205 [2024-11-18 03:12:06.769265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:03.205 [2024-11-18 03:12:06.769274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.205 [2024-11-18 03:12:06.771700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.205 [2024-11-18 03:12:06.771741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:03.205 spare 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.205 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.477 [2024-11-18 03:12:06.781184] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.477 [2024-11-18 03:12:06.783278] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.477 [2024-11-18 03:12:06.783399] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:03.477 [2024-11-18 03:12:06.783414] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:03.477 [2024-11-18 03:12:06.783724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:03.477 [2024-11-18 03:12:06.783874] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:03.477 [2024-11-18 03:12:06.783893] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006280 00:12:03.477 [2024-11-18 03:12:06.784071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.477 
"name": "raid_bdev1", 00:12:03.477 "uuid": "e8b3b05a-50ef-4218-90e0-b5c7df4d820f", 00:12:03.477 "strip_size_kb": 0, 00:12:03.477 "state": "online", 00:12:03.477 "raid_level": "raid1", 00:12:03.477 "superblock": false, 00:12:03.477 "num_base_bdevs": 2, 00:12:03.477 "num_base_bdevs_discovered": 2, 00:12:03.477 "num_base_bdevs_operational": 2, 00:12:03.477 "base_bdevs_list": [ 00:12:03.477 { 00:12:03.477 "name": "BaseBdev1", 00:12:03.477 "uuid": "418f4ab0-05be-58b3-bbcc-3ce658a50bcf", 00:12:03.477 "is_configured": true, 00:12:03.477 "data_offset": 0, 00:12:03.477 "data_size": 65536 00:12:03.477 }, 00:12:03.477 { 00:12:03.477 "name": "BaseBdev2", 00:12:03.477 "uuid": "12943f84-9ccc-5870-8f64-435fea7eefce", 00:12:03.477 "is_configured": true, 00:12:03.477 "data_offset": 0, 00:12:03.477 "data_size": 65536 00:12:03.477 } 00:12:03.477 ] 00:12:03.477 }' 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.477 03:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.737 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:03.737 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:03.737 03:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.737 03:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.737 [2024-11-18 03:12:07.232719] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.737 03:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.737 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:03.737 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.737 03:12:07 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:03.737 03:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.737 03:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.737 03:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.737 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:03.737 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:03.737 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:03.737 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:03.737 03:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.737 03:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.998 [2024-11-18 03:12:07.316233] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:03.998 03:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.998 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:03.998 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.998 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.998 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.998 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.998 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:03.998 03:12:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.998 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.998 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.998 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.998 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.998 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.998 03:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.998 03:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.998 03:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.998 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.998 "name": "raid_bdev1", 00:12:03.998 "uuid": "e8b3b05a-50ef-4218-90e0-b5c7df4d820f", 00:12:03.998 "strip_size_kb": 0, 00:12:03.998 "state": "online", 00:12:03.998 "raid_level": "raid1", 00:12:03.998 "superblock": false, 00:12:03.998 "num_base_bdevs": 2, 00:12:03.998 "num_base_bdevs_discovered": 1, 00:12:03.998 "num_base_bdevs_operational": 1, 00:12:03.998 "base_bdevs_list": [ 00:12:03.998 { 00:12:03.998 "name": null, 00:12:03.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.998 "is_configured": false, 00:12:03.998 "data_offset": 0, 00:12:03.998 "data_size": 65536 00:12:03.998 }, 00:12:03.998 { 00:12:03.998 "name": "BaseBdev2", 00:12:03.998 "uuid": "12943f84-9ccc-5870-8f64-435fea7eefce", 00:12:03.998 "is_configured": true, 00:12:03.998 "data_offset": 0, 00:12:03.998 "data_size": 65536 00:12:03.998 } 00:12:03.998 ] 00:12:03.998 }' 00:12:03.998 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:03.998 03:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.998 [2024-11-18 03:12:07.390155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:03.998 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:03.998 Zero copy mechanism will not be used. 00:12:03.998 Running I/O for 60 seconds... 00:12:04.258 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:04.258 03:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.258 03:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.258 [2024-11-18 03:12:07.778376] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:04.258 03:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.258 03:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:04.258 [2024-11-18 03:12:07.828997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:04.258 [2024-11-18 03:12:07.831020] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:04.518 [2024-11-18 03:12:07.933263] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:04.518 [2024-11-18 03:12:07.933842] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:04.518 [2024-11-18 03:12:08.052386] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:04.518 [2024-11-18 03:12:08.052685] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:05.089 212.00 IOPS, 636.00 MiB/s 
[2024-11-18T03:12:08.666Z] [2024-11-18 03:12:08.395693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:05.089 [2024-11-18 03:12:08.396153] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:05.089 [2024-11-18 03:12:08.510752] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:05.350 [2024-11-18 03:12:08.729203] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:05.350 03:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:05.350 03:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.350 03:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:05.350 03:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:05.350 03:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.350 03:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.350 03:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.350 03:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.350 03:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.350 03:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.350 [2024-11-18 03:12:08.850758] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:05.350 03:12:08 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.350 "name": "raid_bdev1", 00:12:05.350 "uuid": "e8b3b05a-50ef-4218-90e0-b5c7df4d820f", 00:12:05.350 "strip_size_kb": 0, 00:12:05.350 "state": "online", 00:12:05.350 "raid_level": "raid1", 00:12:05.350 "superblock": false, 00:12:05.350 "num_base_bdevs": 2, 00:12:05.350 "num_base_bdevs_discovered": 2, 00:12:05.350 "num_base_bdevs_operational": 2, 00:12:05.350 "process": { 00:12:05.350 "type": "rebuild", 00:12:05.350 "target": "spare", 00:12:05.350 "progress": { 00:12:05.350 "blocks": 14336, 00:12:05.350 "percent": 21 00:12:05.350 } 00:12:05.350 }, 00:12:05.350 "base_bdevs_list": [ 00:12:05.350 { 00:12:05.350 "name": "spare", 00:12:05.350 "uuid": "d0004b20-b5eb-5a0b-bb88-63d4abd62360", 00:12:05.350 "is_configured": true, 00:12:05.350 "data_offset": 0, 00:12:05.350 "data_size": 65536 00:12:05.350 }, 00:12:05.350 { 00:12:05.350 "name": "BaseBdev2", 00:12:05.350 "uuid": "12943f84-9ccc-5870-8f64-435fea7eefce", 00:12:05.350 "is_configured": true, 00:12:05.350 "data_offset": 0, 00:12:05.350 "data_size": 65536 00:12:05.350 } 00:12:05.350 ] 00:12:05.350 }' 00:12:05.350 03:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.350 03:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:05.350 03:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.610 03:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:05.610 03:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:05.610 03:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.610 03:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.610 [2024-11-18 03:12:08.949213] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: 
spare 00:12:05.610 [2024-11-18 03:12:08.968188] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:05.610 [2024-11-18 03:12:08.976137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.610 [2024-11-18 03:12:08.976187] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:05.610 [2024-11-18 03:12:08.976206] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:05.610 [2024-11-18 03:12:08.987982] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:12:05.610 03:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.610 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:05.610 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.610 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.610 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.610 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.610 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:05.610 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.610 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.610 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.610 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.610 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:05.610 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.610 03:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.610 03:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.610 03:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.610 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.610 "name": "raid_bdev1", 00:12:05.610 "uuid": "e8b3b05a-50ef-4218-90e0-b5c7df4d820f", 00:12:05.610 "strip_size_kb": 0, 00:12:05.610 "state": "online", 00:12:05.610 "raid_level": "raid1", 00:12:05.610 "superblock": false, 00:12:05.610 "num_base_bdevs": 2, 00:12:05.610 "num_base_bdevs_discovered": 1, 00:12:05.610 "num_base_bdevs_operational": 1, 00:12:05.610 "base_bdevs_list": [ 00:12:05.610 { 00:12:05.610 "name": null, 00:12:05.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.610 "is_configured": false, 00:12:05.610 "data_offset": 0, 00:12:05.610 "data_size": 65536 00:12:05.610 }, 00:12:05.610 { 00:12:05.611 "name": "BaseBdev2", 00:12:05.611 "uuid": "12943f84-9ccc-5870-8f64-435fea7eefce", 00:12:05.611 "is_configured": true, 00:12:05.611 "data_offset": 0, 00:12:05.611 "data_size": 65536 00:12:05.611 } 00:12:05.611 ] 00:12:05.611 }' 00:12:05.611 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.611 03:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.129 195.50 IOPS, 586.50 MiB/s [2024-11-18T03:12:09.706Z] 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 
00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.129 "name": "raid_bdev1", 00:12:06.129 "uuid": "e8b3b05a-50ef-4218-90e0-b5c7df4d820f", 00:12:06.129 "strip_size_kb": 0, 00:12:06.129 "state": "online", 00:12:06.129 "raid_level": "raid1", 00:12:06.129 "superblock": false, 00:12:06.129 "num_base_bdevs": 2, 00:12:06.129 "num_base_bdevs_discovered": 1, 00:12:06.129 "num_base_bdevs_operational": 1, 00:12:06.129 "base_bdevs_list": [ 00:12:06.129 { 00:12:06.129 "name": null, 00:12:06.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.129 "is_configured": false, 00:12:06.129 "data_offset": 0, 00:12:06.129 "data_size": 65536 00:12:06.129 }, 00:12:06.129 { 00:12:06.129 "name": "BaseBdev2", 00:12:06.129 "uuid": "12943f84-9ccc-5870-8f64-435fea7eefce", 00:12:06.129 "is_configured": true, 00:12:06.129 "data_offset": 0, 00:12:06.129 "data_size": 65536 00:12:06.129 } 00:12:06.129 ] 00:12:06.129 }' 00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.129 [2024-11-18 03:12:09.605418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.129 03:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:06.129 [2024-11-18 03:12:09.644529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:06.129 [2024-11-18 03:12:09.646497] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:06.389 [2024-11-18 03:12:09.749107] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:06.390 [2024-11-18 03:12:09.749616] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:06.649 [2024-11-18 03:12:09.969324] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:06.649 [2024-11-18 03:12:09.969628] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:06.910 179.67 IOPS, 539.00 MiB/s [2024-11-18T03:12:10.487Z] [2024-11-18 03:12:10.453775] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:07.170 03:12:10 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:07.170 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.170 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:07.170 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:07.170 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.170 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.170 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.170 03:12:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.170 03:12:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.170 03:12:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.170 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.170 "name": "raid_bdev1", 00:12:07.170 "uuid": "e8b3b05a-50ef-4218-90e0-b5c7df4d820f", 00:12:07.170 "strip_size_kb": 0, 00:12:07.170 "state": "online", 00:12:07.170 "raid_level": "raid1", 00:12:07.170 "superblock": false, 00:12:07.170 "num_base_bdevs": 2, 00:12:07.170 "num_base_bdevs_discovered": 2, 00:12:07.170 "num_base_bdevs_operational": 2, 00:12:07.170 "process": { 00:12:07.170 "type": "rebuild", 00:12:07.170 "target": "spare", 00:12:07.170 "progress": { 00:12:07.170 "blocks": 12288, 00:12:07.170 "percent": 18 00:12:07.170 } 00:12:07.170 }, 00:12:07.170 "base_bdevs_list": [ 00:12:07.170 { 00:12:07.170 "name": "spare", 00:12:07.170 "uuid": "d0004b20-b5eb-5a0b-bb88-63d4abd62360", 00:12:07.170 "is_configured": true, 00:12:07.170 "data_offset": 0, 00:12:07.170 "data_size": 65536 00:12:07.170 }, 00:12:07.170 { 
00:12:07.170 "name": "BaseBdev2", 00:12:07.170 "uuid": "12943f84-9ccc-5870-8f64-435fea7eefce", 00:12:07.170 "is_configured": true, 00:12:07.170 "data_offset": 0, 00:12:07.170 "data_size": 65536 00:12:07.170 } 00:12:07.170 ] 00:12:07.170 }' 00:12:07.170 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.170 [2024-11-18 03:12:10.694775] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:07.170 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:07.170 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=324 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:07.431 03:12:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.431 [2024-11-18 03:12:10.809407] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.431 "name": "raid_bdev1", 00:12:07.431 "uuid": "e8b3b05a-50ef-4218-90e0-b5c7df4d820f", 00:12:07.431 "strip_size_kb": 0, 00:12:07.431 "state": "online", 00:12:07.431 "raid_level": "raid1", 00:12:07.431 "superblock": false, 00:12:07.431 "num_base_bdevs": 2, 00:12:07.431 "num_base_bdevs_discovered": 2, 00:12:07.431 "num_base_bdevs_operational": 2, 00:12:07.431 "process": { 00:12:07.431 "type": "rebuild", 00:12:07.431 "target": "spare", 00:12:07.431 "progress": { 00:12:07.431 "blocks": 14336, 00:12:07.431 "percent": 21 00:12:07.431 } 00:12:07.431 }, 00:12:07.431 "base_bdevs_list": [ 00:12:07.431 { 00:12:07.431 "name": "spare", 00:12:07.431 "uuid": "d0004b20-b5eb-5a0b-bb88-63d4abd62360", 00:12:07.431 "is_configured": true, 00:12:07.431 "data_offset": 0, 00:12:07.431 "data_size": 65536 00:12:07.431 }, 00:12:07.431 { 00:12:07.431 "name": "BaseBdev2", 00:12:07.431 "uuid": "12943f84-9ccc-5870-8f64-435fea7eefce", 00:12:07.431 "is_configured": true, 00:12:07.431 "data_offset": 0, 00:12:07.431 "data_size": 65536 00:12:07.431 } 00:12:07.431 ] 00:12:07.431 }' 00:12:07.431 03:12:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:07.431 03:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:07.691 [2024-11-18 03:12:11.156267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:08.212 156.00 IOPS, 468.00 MiB/s [2024-11-18T03:12:11.789Z] [2024-11-18 03:12:11.618565] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:08.472 03:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:08.472 03:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.472 03:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.472 03:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.472 03:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.472 03:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.472 03:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.472 03:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.472 03:12:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.472 03:12:11 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:08.472 03:12:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.472 [2024-11-18 03:12:11.943558] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:08.472 03:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.472 "name": "raid_bdev1", 00:12:08.472 "uuid": "e8b3b05a-50ef-4218-90e0-b5c7df4d820f", 00:12:08.472 "strip_size_kb": 0, 00:12:08.472 "state": "online", 00:12:08.472 "raid_level": "raid1", 00:12:08.472 "superblock": false, 00:12:08.472 "num_base_bdevs": 2, 00:12:08.472 "num_base_bdevs_discovered": 2, 00:12:08.472 "num_base_bdevs_operational": 2, 00:12:08.472 "process": { 00:12:08.472 "type": "rebuild", 00:12:08.472 "target": "spare", 00:12:08.472 "progress": { 00:12:08.472 "blocks": 30720, 00:12:08.472 "percent": 46 00:12:08.472 } 00:12:08.472 }, 00:12:08.472 "base_bdevs_list": [ 00:12:08.472 { 00:12:08.472 "name": "spare", 00:12:08.472 "uuid": "d0004b20-b5eb-5a0b-bb88-63d4abd62360", 00:12:08.472 "is_configured": true, 00:12:08.472 "data_offset": 0, 00:12:08.472 "data_size": 65536 00:12:08.472 }, 00:12:08.472 { 00:12:08.472 "name": "BaseBdev2", 00:12:08.472 "uuid": "12943f84-9ccc-5870-8f64-435fea7eefce", 00:12:08.472 "is_configured": true, 00:12:08.472 "data_offset": 0, 00:12:08.472 "data_size": 65536 00:12:08.472 } 00:12:08.472 ] 00:12:08.472 }' 00:12:08.472 03:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.472 03:12:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:08.472 03:12:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.733 03:12:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:08.733 03:12:12 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:12:08.733 [2024-11-18 03:12:12.179684] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:08.992 133.20 IOPS, 399.60 MiB/s [2024-11-18T03:12:12.569Z] [2024-11-18 03:12:12.517981] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:09.563 03:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:09.563 03:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:09.563 03:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.563 03:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:09.563 03:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:09.563 03:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.563 03:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.563 03:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.563 03:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.563 03:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.563 03:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.563 03:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.563 "name": "raid_bdev1", 00:12:09.563 "uuid": "e8b3b05a-50ef-4218-90e0-b5c7df4d820f", 00:12:09.563 "strip_size_kb": 0, 00:12:09.563 "state": "online", 00:12:09.563 "raid_level": "raid1", 00:12:09.563 "superblock": false, 00:12:09.563 "num_base_bdevs": 
2, 00:12:09.563 "num_base_bdevs_discovered": 2, 00:12:09.563 "num_base_bdevs_operational": 2, 00:12:09.563 "process": { 00:12:09.563 "type": "rebuild", 00:12:09.563 "target": "spare", 00:12:09.563 "progress": { 00:12:09.563 "blocks": 49152, 00:12:09.563 "percent": 75 00:12:09.563 } 00:12:09.563 }, 00:12:09.563 "base_bdevs_list": [ 00:12:09.563 { 00:12:09.563 "name": "spare", 00:12:09.563 "uuid": "d0004b20-b5eb-5a0b-bb88-63d4abd62360", 00:12:09.563 "is_configured": true, 00:12:09.563 "data_offset": 0, 00:12:09.563 "data_size": 65536 00:12:09.563 }, 00:12:09.563 { 00:12:09.563 "name": "BaseBdev2", 00:12:09.563 "uuid": "12943f84-9ccc-5870-8f64-435fea7eefce", 00:12:09.563 "is_configured": true, 00:12:09.563 "data_offset": 0, 00:12:09.563 "data_size": 65536 00:12:09.563 } 00:12:09.563 ] 00:12:09.563 }' 00:12:09.563 03:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.824 03:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:09.824 03:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.824 03:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:09.824 03:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:09.824 [2024-11-18 03:12:13.276384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:10.083 117.17 IOPS, 351.50 MiB/s [2024-11-18T03:12:13.660Z] [2024-11-18 03:12:13.504196] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:10.653 [2024-11-18 03:12:14.053073] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:10.653 [2024-11-18 03:12:14.152838] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev 
raid_bdev1 00:12:10.653 [2024-11-18 03:12:14.154831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.653 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:10.653 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.653 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.653 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.653 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.653 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.653 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.653 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.653 03:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.653 03:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.653 03:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.913 "name": "raid_bdev1", 00:12:10.913 "uuid": "e8b3b05a-50ef-4218-90e0-b5c7df4d820f", 00:12:10.913 "strip_size_kb": 0, 00:12:10.913 "state": "online", 00:12:10.913 "raid_level": "raid1", 00:12:10.913 "superblock": false, 00:12:10.913 "num_base_bdevs": 2, 00:12:10.913 "num_base_bdevs_discovered": 2, 00:12:10.913 "num_base_bdevs_operational": 2, 00:12:10.913 "base_bdevs_list": [ 00:12:10.913 { 00:12:10.913 "name": "spare", 00:12:10.913 "uuid": "d0004b20-b5eb-5a0b-bb88-63d4abd62360", 00:12:10.913 "is_configured": true, 00:12:10.913 
"data_offset": 0, 00:12:10.913 "data_size": 65536 00:12:10.913 }, 00:12:10.913 { 00:12:10.913 "name": "BaseBdev2", 00:12:10.913 "uuid": "12943f84-9ccc-5870-8f64-435fea7eefce", 00:12:10.913 "is_configured": true, 00:12:10.913 "data_offset": 0, 00:12:10.913 "data_size": 65536 00:12:10.913 } 00:12:10.913 ] 00:12:10.913 }' 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.913 03:12:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.913 "name": "raid_bdev1", 00:12:10.913 "uuid": "e8b3b05a-50ef-4218-90e0-b5c7df4d820f", 00:12:10.913 "strip_size_kb": 0, 00:12:10.913 "state": "online", 00:12:10.913 "raid_level": "raid1", 00:12:10.913 "superblock": false, 00:12:10.913 "num_base_bdevs": 2, 00:12:10.913 "num_base_bdevs_discovered": 2, 00:12:10.913 "num_base_bdevs_operational": 2, 00:12:10.913 "base_bdevs_list": [ 00:12:10.913 { 00:12:10.913 "name": "spare", 00:12:10.913 "uuid": "d0004b20-b5eb-5a0b-bb88-63d4abd62360", 00:12:10.913 "is_configured": true, 00:12:10.913 "data_offset": 0, 00:12:10.913 "data_size": 65536 00:12:10.913 }, 00:12:10.913 { 00:12:10.913 "name": "BaseBdev2", 00:12:10.913 "uuid": "12943f84-9ccc-5870-8f64-435fea7eefce", 00:12:10.913 "is_configured": true, 00:12:10.913 "data_offset": 0, 00:12:10.913 "data_size": 65536 00:12:10.913 } 00:12:10.913 ] 00:12:10.913 }' 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.913 104.29 IOPS, 312.86 MiB/s [2024-11-18T03:12:14.490Z] 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.913 03:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.173 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.173 "name": "raid_bdev1", 00:12:11.173 "uuid": "e8b3b05a-50ef-4218-90e0-b5c7df4d820f", 00:12:11.173 "strip_size_kb": 0, 00:12:11.173 "state": "online", 00:12:11.173 "raid_level": "raid1", 00:12:11.173 "superblock": false, 00:12:11.173 "num_base_bdevs": 2, 00:12:11.173 "num_base_bdevs_discovered": 2, 00:12:11.173 "num_base_bdevs_operational": 2, 00:12:11.173 "base_bdevs_list": [ 00:12:11.173 { 00:12:11.173 "name": "spare", 00:12:11.173 "uuid": "d0004b20-b5eb-5a0b-bb88-63d4abd62360", 00:12:11.173 "is_configured": true, 00:12:11.173 "data_offset": 0, 00:12:11.173 "data_size": 65536 00:12:11.173 }, 00:12:11.173 { 00:12:11.173 "name": "BaseBdev2", 00:12:11.173 "uuid": "12943f84-9ccc-5870-8f64-435fea7eefce", 00:12:11.173 "is_configured": true, 00:12:11.173 "data_offset": 0, 00:12:11.173 
"data_size": 65536 00:12:11.173 } 00:12:11.173 ] 00:12:11.173 }' 00:12:11.173 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.173 03:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.433 03:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:11.433 03:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.433 03:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.433 [2024-11-18 03:12:14.909228] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:11.433 [2024-11-18 03:12:14.909281] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:11.692 00:12:11.692 Latency(us) 00:12:11.692 [2024-11-18T03:12:15.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.692 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:11.692 raid_bdev1 : 7.63 98.93 296.80 0.00 0.00 13343.31 307.65 108978.64 00:12:11.692 [2024-11-18T03:12:15.269Z] =================================================================================================================== 00:12:11.692 [2024-11-18T03:12:15.269Z] Total : 98.93 296.80 0.00 0.00 13343.31 307.65 108978.64 00:12:11.692 [2024-11-18 03:12:15.013088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.692 [2024-11-18 03:12:15.013136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.692 [2024-11-18 03:12:15.013220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:11.692 [2024-11-18 03:12:15.013231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:11.692 { 00:12:11.692 "results": [ 
00:12:11.692 { 00:12:11.692 "job": "raid_bdev1", 00:12:11.692 "core_mask": "0x1", 00:12:11.692 "workload": "randrw", 00:12:11.692 "percentage": 50, 00:12:11.692 "status": "finished", 00:12:11.692 "queue_depth": 2, 00:12:11.692 "io_size": 3145728, 00:12:11.692 "runtime": 7.631297, 00:12:11.692 "iops": 98.93468960780848, 00:12:11.692 "mibps": 296.80406882342544, 00:12:11.692 "io_failed": 0, 00:12:11.692 "io_timeout": 0, 00:12:11.692 "avg_latency_us": 13343.314339917291, 00:12:11.692 "min_latency_us": 307.6471615720524, 00:12:11.692 "max_latency_us": 108978.64104803493 00:12:11.692 } 00:12:11.692 ], 00:12:11.692 "core_count": 1 00:12:11.692 } 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:11.692 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:11.951 /dev/nbd0 00:12:11.951 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:11.951 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:11.951 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:11.951 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:11.951 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:11.951 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:11.951 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:11.951 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:11.951 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:11.951 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:11.952 1+0 records in 
00:12:11.952 1+0 records out 00:12:11.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321797 s, 12.7 MB/s 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:11.952 03:12:15 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:11.952 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:12.211 /dev/nbd1 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:12.211 1+0 records in 00:12:12.211 1+0 records out 00:12:12.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373857 s, 11.0 MB/s 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.211 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:12.471 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:12.471 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:12.471 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:12.471 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.471 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.471 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:12:12.471 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:12.471 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.471 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:12.471 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:12.471 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:12.471 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:12.471 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:12.471 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.471 03:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # 
killprocess 87289 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 87289 ']' 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 87289 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87289 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:12.738 killing process with pid 87289 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87289' 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 87289 00:12:12.738 Received shutdown signal, test time was about 8.789694 seconds 00:12:12.738 00:12:12.738 Latency(us) 00:12:12.738 [2024-11-18T03:12:16.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.738 [2024-11-18T03:12:16.315Z] =================================================================================================================== 00:12:12.738 [2024-11-18T03:12:16.315Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:12.738 03:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 87289 00:12:12.738 [2024-11-18 03:12:16.165162] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:12.738 [2024-11-18 03:12:16.193332] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:13.007 00:12:13.007 real 0m10.732s 00:12:13.007 user 
0m13.908s 00:12:13.007 sys 0m1.462s 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.007 ************************************ 00:12:13.007 END TEST raid_rebuild_test_io 00:12:13.007 ************************************ 00:12:13.007 03:12:16 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:13.007 03:12:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:13.007 03:12:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:13.007 03:12:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:13.007 ************************************ 00:12:13.007 START TEST raid_rebuild_test_sb_io 00:12:13.007 ************************************ 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87650 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # 
waitforlisten 87650 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 87650 ']' 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:13.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:13.007 03:12:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.267 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:13.267 Zero copy mechanism will not be used. 00:12:13.267 [2024-11-18 03:12:16.595340] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:13.267 [2024-11-18 03:12:16.595480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87650 ] 00:12:13.267 [2024-11-18 03:12:16.756576] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.267 [2024-11-18 03:12:16.808824] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.526 [2024-11-18 03:12:16.852262] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.526 [2024-11-18 03:12:16.852302] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.097 BaseBdev1_malloc 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.097 [2024-11-18 03:12:17.466916] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:14.097 [2024-11-18 03:12:17.466990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.097 [2024-11-18 03:12:17.467023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:14.097 [2024-11-18 03:12:17.467043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.097 [2024-11-18 03:12:17.469227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.097 [2024-11-18 03:12:17.469260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:14.097 BaseBdev1 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.097 BaseBdev2_malloc 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.097 [2024-11-18 03:12:17.505452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:14.097 [2024-11-18 03:12:17.505512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:14.097 [2024-11-18 03:12:17.505537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:14.097 [2024-11-18 03:12:17.505547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.097 [2024-11-18 03:12:17.508097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.097 [2024-11-18 03:12:17.508136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:14.097 BaseBdev2 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.097 spare_malloc 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.097 spare_delay 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.097 
[2024-11-18 03:12:17.546097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:14.097 [2024-11-18 03:12:17.546150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.097 [2024-11-18 03:12:17.546174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:14.097 [2024-11-18 03:12:17.546182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.097 [2024-11-18 03:12:17.548310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.097 [2024-11-18 03:12:17.548342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:14.097 spare 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.097 [2024-11-18 03:12:17.558120] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:14.097 [2024-11-18 03:12:17.559995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:14.097 [2024-11-18 03:12:17.560147] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:14.097 [2024-11-18 03:12:17.560159] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:14.097 [2024-11-18 03:12:17.560422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:14.097 [2024-11-18 03:12:17.560561] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:14.097 [2024-11-18 
03:12:17.560579] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:14.097 [2024-11-18 03:12:17.560698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.097 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.098 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.098 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:14.098 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.098 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.098 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.098 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.098 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.098 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.098 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.098 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.098 03:12:17 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.098 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.098 "name": "raid_bdev1", 00:12:14.098 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:14.098 "strip_size_kb": 0, 00:12:14.098 "state": "online", 00:12:14.098 "raid_level": "raid1", 00:12:14.098 "superblock": true, 00:12:14.098 "num_base_bdevs": 2, 00:12:14.098 "num_base_bdevs_discovered": 2, 00:12:14.098 "num_base_bdevs_operational": 2, 00:12:14.098 "base_bdevs_list": [ 00:12:14.098 { 00:12:14.098 "name": "BaseBdev1", 00:12:14.098 "uuid": "2a4112a4-6665-5a06-a63d-9c8e0ffb52d1", 00:12:14.098 "is_configured": true, 00:12:14.098 "data_offset": 2048, 00:12:14.098 "data_size": 63488 00:12:14.098 }, 00:12:14.098 { 00:12:14.098 "name": "BaseBdev2", 00:12:14.098 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:14.098 "is_configured": true, 00:12:14.098 "data_offset": 2048, 00:12:14.098 "data_size": 63488 00:12:14.098 } 00:12:14.098 ] 00:12:14.098 }' 00:12:14.098 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.098 03:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.668 [2024-11-18 03:12:18.049569] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.668 [2024-11-18 03:12:18.145095] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.668 "name": "raid_bdev1", 00:12:14.668 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:14.668 "strip_size_kb": 0, 00:12:14.668 "state": "online", 00:12:14.668 "raid_level": "raid1", 00:12:14.668 "superblock": true, 00:12:14.668 "num_base_bdevs": 2, 00:12:14.668 "num_base_bdevs_discovered": 1, 00:12:14.668 "num_base_bdevs_operational": 1, 00:12:14.668 "base_bdevs_list": [ 00:12:14.668 { 00:12:14.668 "name": null, 00:12:14.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.668 "is_configured": false, 00:12:14.668 "data_offset": 0, 00:12:14.668 "data_size": 63488 00:12:14.668 }, 00:12:14.668 { 00:12:14.668 "name": "BaseBdev2", 00:12:14.668 "uuid": 
"f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:14.668 "is_configured": true, 00:12:14.668 "data_offset": 2048, 00:12:14.668 "data_size": 63488 00:12:14.668 } 00:12:14.668 ] 00:12:14.668 }' 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.668 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.928 [2024-11-18 03:12:18.242971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:14.928 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:14.928 Zero copy mechanism will not be used. 00:12:14.928 Running I/O for 60 seconds... 00:12:15.188 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:15.188 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.188 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.188 [2024-11-18 03:12:18.567377] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:15.188 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.188 03:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:15.188 [2024-11-18 03:12:18.616204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:15.188 [2024-11-18 03:12:18.618186] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:15.188 [2024-11-18 03:12:18.742688] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:15.188 [2024-11-18 03:12:18.743240] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:15.448 [2024-11-18 03:12:18.960694] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:15.449 [2024-11-18 03:12:18.961029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:15.709 146.00 IOPS, 438.00 MiB/s [2024-11-18T03:12:19.286Z] [2024-11-18 03:12:19.277595] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:15.709 [2024-11-18 03:12:19.278082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:15.968 [2024-11-18 03:12:19.503439] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:15.968 [2024-11-18 03:12:19.503727] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:16.229 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:16.229 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.229 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:16.229 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:16.229 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.229 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.229 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.229 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.229 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
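Throughout this trace, the verify helpers pull state with `rpc_cmd bdev_raid_get_bdevs all` and narrow it with `jq -r '.[] | select(.name == "raid_bdev1")'`. That selection step can be sketched in plain Python for readers following along without jq; the abridged sample record and the helper name below are illustrative only, not part of the SPDK test suite:

```python
import json

def select_bdev(rpc_output: str, name: str):
    """Mimic jq's '.[] | select(.name == NAME)' over bdev_raid_get_bdevs output."""
    for bdev in json.loads(rpc_output):
        if bdev.get("name") == name:
            return bdev
    return None

# Abridged from the raid_bdev_info dump in the log above.
sample = json.dumps([{
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 1,
}])

info = select_bdev(sample, "raid_bdev1")
print(info["num_base_bdevs_discovered"])  # → 1
```

The test script then asserts on fields of the selected record (state, process type, base bdev counts) exactly as the `[[ ... ]]` checks in the trace do.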
00:12:16.229 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.229 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.229 "name": "raid_bdev1", 00:12:16.229 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:16.229 "strip_size_kb": 0, 00:12:16.229 "state": "online", 00:12:16.229 "raid_level": "raid1", 00:12:16.229 "superblock": true, 00:12:16.229 "num_base_bdevs": 2, 00:12:16.229 "num_base_bdevs_discovered": 2, 00:12:16.229 "num_base_bdevs_operational": 2, 00:12:16.229 "process": { 00:12:16.229 "type": "rebuild", 00:12:16.229 "target": "spare", 00:12:16.229 "progress": { 00:12:16.229 "blocks": 10240, 00:12:16.229 "percent": 16 00:12:16.229 } 00:12:16.229 }, 00:12:16.229 "base_bdevs_list": [ 00:12:16.229 { 00:12:16.229 "name": "spare", 00:12:16.229 "uuid": "a03f604a-eb2e-5f6c-894d-d7211621c78a", 00:12:16.229 "is_configured": true, 00:12:16.229 "data_offset": 2048, 00:12:16.229 "data_size": 63488 00:12:16.229 }, 00:12:16.229 { 00:12:16.229 "name": "BaseBdev2", 00:12:16.229 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:16.229 "is_configured": true, 00:12:16.229 "data_offset": 2048, 00:12:16.229 "data_size": 63488 00:12:16.229 } 00:12:16.229 ] 00:12:16.229 }' 00:12:16.230 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.230 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:16.230 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.230 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:16.230 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:16.230 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
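In the process JSON dumped above, the reported "percent" tracks "blocks" against the 63488-block data_size. A quick consistency check of that relationship, with integer truncation inferred from the observed value pairs rather than from SPDK documentation:

```python
def rebuild_percent(blocks_done: int, data_size_blocks: int) -> int:
    # Progress percent as integer division, matching the
    # (blocks, percent) pairs that appear in this trace.
    return blocks_done * 100 // data_size_blocks

# (blocks, percent) pairs observed in the log, against data_size 63488.
for blocks, percent in [(10240, 16), (12288, 19), (14336, 22),
                        (30720, 48), (49152, 77)]:
    assert rebuild_percent(blocks, 63488) == percent
print("all progress records consistent")  # → all progress records consistent
```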
00:12:16.230 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.230 [2024-11-18 03:12:19.757656] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:16.491 [2024-11-18 03:12:19.828999] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:16.491 [2024-11-18 03:12:19.942661] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:16.491 [2024-11-18 03:12:19.956399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.491 [2024-11-18 03:12:19.956456] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:16.491 [2024-11-18 03:12:19.956489] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:16.491 [2024-11-18 03:12:19.968223] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:12:16.491 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.491 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:16.491 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.491 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.491 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.491 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.491 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:16.491 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.491 03:12:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.491 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.491 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.491 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.491 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.491 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.491 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.491 03:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.491 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.491 "name": "raid_bdev1", 00:12:16.491 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:16.491 "strip_size_kb": 0, 00:12:16.491 "state": "online", 00:12:16.491 "raid_level": "raid1", 00:12:16.491 "superblock": true, 00:12:16.491 "num_base_bdevs": 2, 00:12:16.491 "num_base_bdevs_discovered": 1, 00:12:16.491 "num_base_bdevs_operational": 1, 00:12:16.491 "base_bdevs_list": [ 00:12:16.491 { 00:12:16.491 "name": null, 00:12:16.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.491 "is_configured": false, 00:12:16.491 "data_offset": 0, 00:12:16.491 "data_size": 63488 00:12:16.491 }, 00:12:16.491 { 00:12:16.491 "name": "BaseBdev2", 00:12:16.491 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:16.491 "is_configured": true, 00:12:16.491 "data_offset": 2048, 00:12:16.491 "data_size": 63488 00:12:16.491 } 00:12:16.491 ] 00:12:16.491 }' 00:12:16.491 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.491 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.012 142.00 IOPS, 426.00 MiB/s [2024-11-18T03:12:20.589Z] 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:17.012 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.012 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:17.012 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:17.012 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.012 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.012 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.012 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.012 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.012 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.012 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.012 "name": "raid_bdev1", 00:12:17.012 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:17.012 "strip_size_kb": 0, 00:12:17.012 "state": "online", 00:12:17.012 "raid_level": "raid1", 00:12:17.012 "superblock": true, 00:12:17.012 "num_base_bdevs": 2, 00:12:17.012 "num_base_bdevs_discovered": 1, 00:12:17.012 "num_base_bdevs_operational": 1, 00:12:17.012 "base_bdevs_list": [ 00:12:17.012 { 00:12:17.012 "name": null, 00:12:17.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.012 "is_configured": false, 00:12:17.012 "data_offset": 0, 00:12:17.012 "data_size": 63488 00:12:17.012 }, 00:12:17.012 { 00:12:17.012 "name": "BaseBdev2", 
00:12:17.012 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:17.012 "is_configured": true, 00:12:17.012 "data_offset": 2048, 00:12:17.012 "data_size": 63488 00:12:17.012 } 00:12:17.012 ] 00:12:17.012 }' 00:12:17.012 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.012 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:17.012 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.012 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:17.012 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:17.012 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.012 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.012 [2024-11-18 03:12:20.570831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:17.272 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.272 03:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:17.272 [2024-11-18 03:12:20.608754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:17.272 [2024-11-18 03:12:20.610678] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:17.272 [2024-11-18 03:12:20.734806] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:17.272 [2024-11-18 03:12:20.735394] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:17.532 [2024-11-18 03:12:20.949489] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:17.532 [2024-11-18 03:12:20.949768] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:17.792 153.67 IOPS, 461.00 MiB/s [2024-11-18T03:12:21.369Z] [2024-11-18 03:12:21.289733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:18.052 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:18.052 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.052 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:18.052 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:18.052 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.052 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.052 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.052 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.052 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.312 [2024-11-18 03:12:21.635928] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.312 "name": "raid_bdev1", 00:12:18.312 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:18.312 "strip_size_kb": 0, 00:12:18.312 
"state": "online", 00:12:18.312 "raid_level": "raid1", 00:12:18.312 "superblock": true, 00:12:18.312 "num_base_bdevs": 2, 00:12:18.312 "num_base_bdevs_discovered": 2, 00:12:18.312 "num_base_bdevs_operational": 2, 00:12:18.312 "process": { 00:12:18.312 "type": "rebuild", 00:12:18.312 "target": "spare", 00:12:18.312 "progress": { 00:12:18.312 "blocks": 12288, 00:12:18.312 "percent": 19 00:12:18.312 } 00:12:18.312 }, 00:12:18.312 "base_bdevs_list": [ 00:12:18.312 { 00:12:18.312 "name": "spare", 00:12:18.312 "uuid": "a03f604a-eb2e-5f6c-894d-d7211621c78a", 00:12:18.312 "is_configured": true, 00:12:18.312 "data_offset": 2048, 00:12:18.312 "data_size": 63488 00:12:18.312 }, 00:12:18.312 { 00:12:18.312 "name": "BaseBdev2", 00:12:18.312 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:18.312 "is_configured": true, 00:12:18.312 "data_offset": 2048, 00:12:18.312 "data_size": 63488 00:12:18.312 } 00:12:18.312 ] 00:12:18.312 }' 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:18.312 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:18.312 03:12:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=335 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.312 "name": "raid_bdev1", 00:12:18.312 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:18.312 "strip_size_kb": 0, 00:12:18.312 "state": "online", 00:12:18.312 "raid_level": "raid1", 00:12:18.312 "superblock": true, 00:12:18.312 "num_base_bdevs": 2, 00:12:18.312 "num_base_bdevs_discovered": 2, 00:12:18.312 "num_base_bdevs_operational": 2, 00:12:18.312 "process": { 00:12:18.312 "type": "rebuild", 00:12:18.312 "target": "spare", 00:12:18.312 
"progress": { 00:12:18.312 "blocks": 14336, 00:12:18.312 "percent": 22 00:12:18.312 } 00:12:18.312 }, 00:12:18.312 "base_bdevs_list": [ 00:12:18.312 { 00:12:18.312 "name": "spare", 00:12:18.312 "uuid": "a03f604a-eb2e-5f6c-894d-d7211621c78a", 00:12:18.312 "is_configured": true, 00:12:18.312 "data_offset": 2048, 00:12:18.312 "data_size": 63488 00:12:18.312 }, 00:12:18.312 { 00:12:18.312 "name": "BaseBdev2", 00:12:18.312 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:18.312 "is_configured": true, 00:12:18.312 "data_offset": 2048, 00:12:18.312 "data_size": 63488 00:12:18.312 } 00:12:18.312 ] 00:12:18.312 }' 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:18.312 03:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:18.571 [2024-11-18 03:12:22.071609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:18.571 [2024-11-18 03:12:22.072181] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:18.830 129.25 IOPS, 387.75 MiB/s [2024-11-18T03:12:22.407Z] [2024-11-18 03:12:22.287693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:19.090 [2024-11-18 03:12:22.608227] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:19.351 [2024-11-18 03:12:22.727333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:19.351 03:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:19.351 03:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:19.351 03:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.351 03:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:19.351 03:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:19.351 03:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.351 03:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.351 03:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.351 03:12:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.351 03:12:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.351 03:12:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.611 03:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.611 "name": "raid_bdev1", 00:12:19.611 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:19.611 "strip_size_kb": 0, 00:12:19.611 "state": "online", 00:12:19.611 "raid_level": "raid1", 00:12:19.611 "superblock": true, 00:12:19.611 "num_base_bdevs": 2, 00:12:19.611 "num_base_bdevs_discovered": 2, 00:12:19.611 "num_base_bdevs_operational": 2, 00:12:19.611 "process": { 00:12:19.611 "type": "rebuild", 00:12:19.611 "target": "spare", 00:12:19.611 "progress": { 00:12:19.611 "blocks": 30720, 00:12:19.611 "percent": 48 00:12:19.611 } 00:12:19.611 }, 00:12:19.611 
"base_bdevs_list": [ 00:12:19.611 { 00:12:19.611 "name": "spare", 00:12:19.611 "uuid": "a03f604a-eb2e-5f6c-894d-d7211621c78a", 00:12:19.611 "is_configured": true, 00:12:19.611 "data_offset": 2048, 00:12:19.611 "data_size": 63488 00:12:19.611 }, 00:12:19.611 { 00:12:19.611 "name": "BaseBdev2", 00:12:19.611 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:19.611 "is_configured": true, 00:12:19.611 "data_offset": 2048, 00:12:19.611 "data_size": 63488 00:12:19.611 } 00:12:19.611 ] 00:12:19.611 }' 00:12:19.611 03:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.611 03:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:19.611 03:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.611 03:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:19.611 03:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:19.871 114.40 IOPS, 343.20 MiB/s [2024-11-18T03:12:23.448Z] [2024-11-18 03:12:23.288408] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:19.871 [2024-11-18 03:12:23.416057] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:20.440 [2024-11-18 03:12:23.851309] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:20.700 03:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:20.700 03:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:20.700 03:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.700 03:12:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:20.700 03:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:20.700 03:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.700 03:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.700 03:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.700 03:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.700 03:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.700 03:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.700 03:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.700 "name": "raid_bdev1", 00:12:20.700 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:20.700 "strip_size_kb": 0, 00:12:20.700 "state": "online", 00:12:20.700 "raid_level": "raid1", 00:12:20.700 "superblock": true, 00:12:20.700 "num_base_bdevs": 2, 00:12:20.700 "num_base_bdevs_discovered": 2, 00:12:20.700 "num_base_bdevs_operational": 2, 00:12:20.700 "process": { 00:12:20.700 "type": "rebuild", 00:12:20.700 "target": "spare", 00:12:20.700 "progress": { 00:12:20.700 "blocks": 49152, 00:12:20.700 "percent": 77 00:12:20.700 } 00:12:20.700 }, 00:12:20.700 "base_bdevs_list": [ 00:12:20.700 { 00:12:20.700 "name": "spare", 00:12:20.700 "uuid": "a03f604a-eb2e-5f6c-894d-d7211621c78a", 00:12:20.700 "is_configured": true, 00:12:20.700 "data_offset": 2048, 00:12:20.700 "data_size": 63488 00:12:20.700 }, 00:12:20.700 { 00:12:20.701 "name": "BaseBdev2", 00:12:20.701 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:20.701 "is_configured": true, 00:12:20.701 "data_offset": 2048, 00:12:20.701 "data_size": 
63488 00:12:20.701 } 00:12:20.701 ] 00:12:20.701 }' 00:12:20.701 03:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.701 03:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:20.701 03:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.701 03:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:20.701 03:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:20.972 102.67 IOPS, 308.00 MiB/s [2024-11-18T03:12:24.549Z] [2024-11-18 03:12:24.499139] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:21.258 [2024-11-18 03:12:24.701915] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:21.259 [2024-11-18 03:12:24.805100] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:21.259 [2024-11-18 03:12:24.806810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.843 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:21.843 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.843 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.843 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.843 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.843 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.843 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.844 93.29 IOPS, 279.86 MiB/s [2024-11-18T03:12:25.421Z] 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.844 "name": "raid_bdev1", 00:12:21.844 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:21.844 "strip_size_kb": 0, 00:12:21.844 "state": "online", 00:12:21.844 "raid_level": "raid1", 00:12:21.844 "superblock": true, 00:12:21.844 "num_base_bdevs": 2, 00:12:21.844 "num_base_bdevs_discovered": 2, 00:12:21.844 "num_base_bdevs_operational": 2, 00:12:21.844 "base_bdevs_list": [ 00:12:21.844 { 00:12:21.844 "name": "spare", 00:12:21.844 "uuid": "a03f604a-eb2e-5f6c-894d-d7211621c78a", 00:12:21.844 "is_configured": true, 00:12:21.844 "data_offset": 2048, 00:12:21.844 "data_size": 63488 00:12:21.844 }, 00:12:21.844 { 00:12:21.844 "name": "BaseBdev2", 00:12:21.844 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:21.844 "is_configured": true, 00:12:21.844 "data_offset": 2048, 00:12:21.844 "data_size": 63488 00:12:21.844 } 00:12:21.844 ] 00:12:21.844 }' 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:21.844 03:12:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.844 "name": "raid_bdev1", 00:12:21.844 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:21.844 "strip_size_kb": 0, 00:12:21.844 "state": "online", 00:12:21.844 "raid_level": "raid1", 00:12:21.844 "superblock": true, 00:12:21.844 "num_base_bdevs": 2, 00:12:21.844 "num_base_bdevs_discovered": 2, 00:12:21.844 "num_base_bdevs_operational": 2, 00:12:21.844 "base_bdevs_list": [ 00:12:21.844 { 00:12:21.844 "name": "spare", 00:12:21.844 "uuid": "a03f604a-eb2e-5f6c-894d-d7211621c78a", 00:12:21.844 "is_configured": true, 00:12:21.844 "data_offset": 2048, 00:12:21.844 "data_size": 63488 00:12:21.844 }, 00:12:21.844 { 00:12:21.844 "name": "BaseBdev2", 00:12:21.844 "uuid": 
"f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:21.844 "is_configured": true, 00:12:21.844 "data_offset": 2048, 00:12:21.844 "data_size": 63488 00:12:21.844 } 00:12:21.844 ] 00:12:21.844 }' 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:21.844 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.104 "name": "raid_bdev1", 00:12:22.104 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:22.104 "strip_size_kb": 0, 00:12:22.104 "state": "online", 00:12:22.104 "raid_level": "raid1", 00:12:22.104 "superblock": true, 00:12:22.104 "num_base_bdevs": 2, 00:12:22.104 "num_base_bdevs_discovered": 2, 00:12:22.104 "num_base_bdevs_operational": 2, 00:12:22.104 "base_bdevs_list": [ 00:12:22.104 { 00:12:22.104 "name": "spare", 00:12:22.104 "uuid": "a03f604a-eb2e-5f6c-894d-d7211621c78a", 00:12:22.104 "is_configured": true, 00:12:22.104 "data_offset": 2048, 00:12:22.104 "data_size": 63488 00:12:22.104 }, 00:12:22.104 { 00:12:22.104 "name": "BaseBdev2", 00:12:22.104 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:22.104 "is_configured": true, 00:12:22.104 "data_offset": 2048, 00:12:22.104 "data_size": 63488 00:12:22.104 } 00:12:22.104 ] 00:12:22.104 }' 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.104 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.364 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:22.364 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.364 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.364 [2024-11-18 03:12:25.894700] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:22.364 [2024-11-18 
03:12:25.894735] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.625 00:12:22.625 Latency(us) 00:12:22.625 [2024-11-18T03:12:26.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.625 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:22.625 raid_bdev1 : 7.76 87.67 263.01 0.00 0.00 16236.08 298.70 108520.75 00:12:22.625 [2024-11-18T03:12:26.202Z] =================================================================================================================== 00:12:22.625 [2024-11-18T03:12:26.202Z] Total : 87.67 263.01 0.00 0.00 16236.08 298.70 108520.75 00:12:22.625 [2024-11-18 03:12:25.990114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.625 [2024-11-18 03:12:25.990156] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.625 [2024-11-18 03:12:25.990231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.625 [2024-11-18 03:12:25.990247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:22.625 { 00:12:22.625 "results": [ 00:12:22.625 { 00:12:22.625 "job": "raid_bdev1", 00:12:22.625 "core_mask": "0x1", 00:12:22.625 "workload": "randrw", 00:12:22.625 "percentage": 50, 00:12:22.625 "status": "finished", 00:12:22.625 "queue_depth": 2, 00:12:22.625 "io_size": 3145728, 00:12:22.625 "runtime": 7.756413, 00:12:22.625 "iops": 87.66939047727345, 00:12:22.625 "mibps": 263.00817143182036, 00:12:22.625 "io_failed": 0, 00:12:22.625 "io_timeout": 0, 00:12:22.625 "avg_latency_us": 16236.082897508348, 00:12:22.625 "min_latency_us": 298.70393013100437, 00:12:22.625 "max_latency_us": 108520.74759825328 00:12:22.625 } 00:12:22.625 ], 00:12:22.625 "core_count": 1 00:12:22.625 } 00:12:22.625 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.625 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.625 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.625 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:22.625 03:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.625 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.625 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:22.625 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:22.625 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:22.625 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:22.625 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:22.625 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:22.625 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:22.625 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:22.625 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:22.625 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:22.625 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:22.625 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:22.625 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:22.885 /dev/nbd0 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.885 1+0 records in 00:12:22.885 1+0 records out 00:12:22.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408071 s, 10.0 MB/s 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:22.885 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:23.146 /dev/nbd1 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.146 1+0 records in 00:12:23.146 1+0 records out 00:12:23.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426534 s, 9.6 MB/s 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:23.146 03:12:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.146 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:23.406 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:23.406 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:23.406 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:23.406 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.406 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.406 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:23.406 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:23.406 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.406 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:23.406 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:12:23.406 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:23.406 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:23.406 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:23.406 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.406 03:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.667 
03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.667 [2024-11-18 03:12:27.051145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:23.667 [2024-11-18 03:12:27.051198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.667 [2024-11-18 03:12:27.051216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:23.667 [2024-11-18 03:12:27.051227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.667 [2024-11-18 03:12:27.053383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.667 [2024-11-18 03:12:27.053418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:23.667 [2024-11-18 03:12:27.053501] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:23.667 [2024-11-18 03:12:27.053543] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:23.667 [2024-11-18 03:12:27.053645] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:23.667 spare 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.667 [2024-11-18 03:12:27.153539] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000006600 00:12:23.667 [2024-11-18 03:12:27.153573] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:23.667 [2024-11-18 03:12:27.153837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:12:23.667 [2024-11-18 03:12:27.153962] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:23.667 [2024-11-18 03:12:27.153993] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:23.667 [2024-11-18 03:12:27.154115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.667 "name": "raid_bdev1", 00:12:23.667 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:23.667 "strip_size_kb": 0, 00:12:23.667 "state": "online", 00:12:23.667 "raid_level": "raid1", 00:12:23.667 "superblock": true, 00:12:23.667 "num_base_bdevs": 2, 00:12:23.667 "num_base_bdevs_discovered": 2, 00:12:23.667 "num_base_bdevs_operational": 2, 00:12:23.667 "base_bdevs_list": [ 00:12:23.667 { 00:12:23.667 "name": "spare", 00:12:23.667 "uuid": "a03f604a-eb2e-5f6c-894d-d7211621c78a", 00:12:23.667 "is_configured": true, 00:12:23.667 "data_offset": 2048, 00:12:23.667 "data_size": 63488 00:12:23.667 }, 00:12:23.667 { 00:12:23.667 "name": "BaseBdev2", 00:12:23.667 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:23.667 "is_configured": true, 00:12:23.667 "data_offset": 2048, 00:12:23.667 "data_size": 63488 00:12:23.667 } 00:12:23.667 ] 00:12:23.667 }' 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.667 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.237 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:24.237 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.237 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:24.237 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:24.237 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.237 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.237 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.237 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.238 "name": "raid_bdev1", 00:12:24.238 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:24.238 "strip_size_kb": 0, 00:12:24.238 "state": "online", 00:12:24.238 "raid_level": "raid1", 00:12:24.238 "superblock": true, 00:12:24.238 "num_base_bdevs": 2, 00:12:24.238 "num_base_bdevs_discovered": 2, 00:12:24.238 "num_base_bdevs_operational": 2, 00:12:24.238 "base_bdevs_list": [ 00:12:24.238 { 00:12:24.238 "name": "spare", 00:12:24.238 "uuid": "a03f604a-eb2e-5f6c-894d-d7211621c78a", 00:12:24.238 "is_configured": true, 00:12:24.238 "data_offset": 2048, 00:12:24.238 "data_size": 63488 00:12:24.238 }, 00:12:24.238 { 00:12:24.238 "name": "BaseBdev2", 00:12:24.238 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:24.238 "is_configured": true, 00:12:24.238 "data_offset": 2048, 00:12:24.238 "data_size": 63488 00:12:24.238 } 00:12:24.238 ] 00:12:24.238 }' 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.238 [2024-11-18 03:12:27.774075] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.238 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.498 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.498 "name": "raid_bdev1", 00:12:24.498 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:24.498 "strip_size_kb": 0, 00:12:24.498 "state": "online", 00:12:24.498 "raid_level": "raid1", 00:12:24.498 "superblock": true, 00:12:24.498 "num_base_bdevs": 2, 00:12:24.498 "num_base_bdevs_discovered": 1, 00:12:24.498 "num_base_bdevs_operational": 1, 00:12:24.498 "base_bdevs_list": [ 00:12:24.498 { 00:12:24.498 "name": null, 00:12:24.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.498 "is_configured": false, 00:12:24.498 "data_offset": 0, 00:12:24.498 "data_size": 63488 00:12:24.498 }, 00:12:24.498 { 00:12:24.498 "name": "BaseBdev2", 00:12:24.498 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:24.498 
"is_configured": true, 00:12:24.498 "data_offset": 2048, 00:12:24.498 "data_size": 63488 00:12:24.498 } 00:12:24.498 ] 00:12:24.498 }' 00:12:24.498 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.498 03:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.758 03:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:24.758 03:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.758 03:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.758 [2024-11-18 03:12:28.249327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:24.758 [2024-11-18 03:12:28.249524] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:24.758 [2024-11-18 03:12:28.249537] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:24.758 [2024-11-18 03:12:28.249570] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:24.758 [2024-11-18 03:12:28.253984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:12:24.758 03:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.758 03:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:24.758 [2024-11-18 03:12:28.255901] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:25.701 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.701 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.701 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.701 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.701 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.701 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.701 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.701 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.701 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.964 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.964 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.964 "name": "raid_bdev1", 00:12:25.964 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:25.964 "strip_size_kb": 0, 00:12:25.964 "state": "online", 
00:12:25.964 "raid_level": "raid1", 00:12:25.964 "superblock": true, 00:12:25.964 "num_base_bdevs": 2, 00:12:25.964 "num_base_bdevs_discovered": 2, 00:12:25.964 "num_base_bdevs_operational": 2, 00:12:25.964 "process": { 00:12:25.964 "type": "rebuild", 00:12:25.964 "target": "spare", 00:12:25.964 "progress": { 00:12:25.964 "blocks": 20480, 00:12:25.964 "percent": 32 00:12:25.964 } 00:12:25.964 }, 00:12:25.964 "base_bdevs_list": [ 00:12:25.964 { 00:12:25.964 "name": "spare", 00:12:25.965 "uuid": "a03f604a-eb2e-5f6c-894d-d7211621c78a", 00:12:25.965 "is_configured": true, 00:12:25.965 "data_offset": 2048, 00:12:25.965 "data_size": 63488 00:12:25.965 }, 00:12:25.965 { 00:12:25.965 "name": "BaseBdev2", 00:12:25.965 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:25.965 "is_configured": true, 00:12:25.965 "data_offset": 2048, 00:12:25.965 "data_size": 63488 00:12:25.965 } 00:12:25.965 ] 00:12:25.965 }' 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.965 [2024-11-18 03:12:29.388404] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:25.965 [2024-11-18 03:12:29.460205] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:25.965 [2024-11-18 
03:12:29.460289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.965 [2024-11-18 03:12:29.460309] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:25.965 [2024-11-18 03:12:29.460317] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.965 "name": "raid_bdev1", 00:12:25.965 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:25.965 "strip_size_kb": 0, 00:12:25.965 "state": "online", 00:12:25.965 "raid_level": "raid1", 00:12:25.965 "superblock": true, 00:12:25.965 "num_base_bdevs": 2, 00:12:25.965 "num_base_bdevs_discovered": 1, 00:12:25.965 "num_base_bdevs_operational": 1, 00:12:25.965 "base_bdevs_list": [ 00:12:25.965 { 00:12:25.965 "name": null, 00:12:25.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.965 "is_configured": false, 00:12:25.965 "data_offset": 0, 00:12:25.965 "data_size": 63488 00:12:25.965 }, 00:12:25.965 { 00:12:25.965 "name": "BaseBdev2", 00:12:25.965 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:25.965 "is_configured": true, 00:12:25.965 "data_offset": 2048, 00:12:25.965 "data_size": 63488 00:12:25.965 } 00:12:25.965 ] 00:12:25.965 }' 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.965 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.535 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:26.535 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.535 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.535 [2024-11-18 03:12:29.888278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:26.535 [2024-11-18 03:12:29.888344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.535 [2024-11-18 03:12:29.888370] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:12:26.535 [2024-11-18 03:12:29.888379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.535 [2024-11-18 03:12:29.888801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.535 [2024-11-18 03:12:29.888824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:26.535 [2024-11-18 03:12:29.888910] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:26.535 [2024-11-18 03:12:29.888925] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:26.535 [2024-11-18 03:12:29.888936] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:26.535 [2024-11-18 03:12:29.888953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:26.535 spare 00:12:26.535 [2024-11-18 03:12:29.893468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:12:26.535 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.535 03:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:26.535 [2024-11-18 03:12:29.895446] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:27.475 03:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.475 03:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.475 03:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.475 03:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.475 03:12:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.475 03:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.475 03:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.475 03:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.475 03:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.475 03:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.475 03:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.475 "name": "raid_bdev1", 00:12:27.475 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:27.475 "strip_size_kb": 0, 00:12:27.475 "state": "online", 00:12:27.475 "raid_level": "raid1", 00:12:27.475 "superblock": true, 00:12:27.475 "num_base_bdevs": 2, 00:12:27.475 "num_base_bdevs_discovered": 2, 00:12:27.475 "num_base_bdevs_operational": 2, 00:12:27.475 "process": { 00:12:27.475 "type": "rebuild", 00:12:27.475 "target": "spare", 00:12:27.475 "progress": { 00:12:27.475 "blocks": 20480, 00:12:27.475 "percent": 32 00:12:27.475 } 00:12:27.475 }, 00:12:27.475 "base_bdevs_list": [ 00:12:27.475 { 00:12:27.475 "name": "spare", 00:12:27.475 "uuid": "a03f604a-eb2e-5f6c-894d-d7211621c78a", 00:12:27.475 "is_configured": true, 00:12:27.475 "data_offset": 2048, 00:12:27.475 "data_size": 63488 00:12:27.475 }, 00:12:27.475 { 00:12:27.475 "name": "BaseBdev2", 00:12:27.475 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:27.475 "is_configured": true, 00:12:27.475 "data_offset": 2048, 00:12:27.475 "data_size": 63488 00:12:27.475 } 00:12:27.475 ] 00:12:27.475 }' 00:12:27.475 03:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.475 03:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:12:27.475 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.736 [2024-11-18 03:12:31.060001] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:27.736 [2024-11-18 03:12:31.099805] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:27.736 [2024-11-18 03:12:31.099875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.736 [2024-11-18 03:12:31.099889] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:27.736 [2024-11-18 03:12:31.099898] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.736 "name": "raid_bdev1", 00:12:27.736 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:27.736 "strip_size_kb": 0, 00:12:27.736 "state": "online", 00:12:27.736 "raid_level": "raid1", 00:12:27.736 "superblock": true, 00:12:27.736 "num_base_bdevs": 2, 00:12:27.736 "num_base_bdevs_discovered": 1, 00:12:27.736 "num_base_bdevs_operational": 1, 00:12:27.736 "base_bdevs_list": [ 00:12:27.736 { 00:12:27.736 "name": null, 00:12:27.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.736 "is_configured": false, 00:12:27.736 "data_offset": 0, 00:12:27.736 "data_size": 63488 00:12:27.736 }, 00:12:27.736 { 00:12:27.736 "name": "BaseBdev2", 00:12:27.736 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:27.736 "is_configured": true, 00:12:27.736 "data_offset": 2048, 00:12:27.736 "data_size": 63488 00:12:27.736 } 00:12:27.736 ] 00:12:27.736 }' 
00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.736 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.995 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:27.996 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.996 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:27.996 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:27.996 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.996 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.996 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.996 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.996 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.996 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.256 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.256 "name": "raid_bdev1", 00:12:28.256 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:28.256 "strip_size_kb": 0, 00:12:28.256 "state": "online", 00:12:28.256 "raid_level": "raid1", 00:12:28.256 "superblock": true, 00:12:28.256 "num_base_bdevs": 2, 00:12:28.256 "num_base_bdevs_discovered": 1, 00:12:28.256 "num_base_bdevs_operational": 1, 00:12:28.256 "base_bdevs_list": [ 00:12:28.256 { 00:12:28.256 "name": null, 00:12:28.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.256 "is_configured": false, 00:12:28.256 "data_offset": 0, 
00:12:28.256 "data_size": 63488 00:12:28.256 }, 00:12:28.256 { 00:12:28.256 "name": "BaseBdev2", 00:12:28.256 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:28.256 "is_configured": true, 00:12:28.256 "data_offset": 2048, 00:12:28.256 "data_size": 63488 00:12:28.256 } 00:12:28.256 ] 00:12:28.256 }' 00:12:28.256 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.256 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:28.256 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.256 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:28.256 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:28.256 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.256 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.256 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.256 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:28.256 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.256 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.256 [2024-11-18 03:12:31.679674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:28.256 [2024-11-18 03:12:31.679735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.256 [2024-11-18 03:12:31.679771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:28.256 [2024-11-18 03:12:31.679782] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.256 [2024-11-18 03:12:31.680189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.256 [2024-11-18 03:12:31.680218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:28.256 [2024-11-18 03:12:31.680288] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:28.256 [2024-11-18 03:12:31.680306] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:28.256 [2024-11-18 03:12:31.680314] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:28.256 [2024-11-18 03:12:31.680326] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:28.256 BaseBdev1 00:12:28.256 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.256 03:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:29.197 03:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:29.197 03:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.197 03:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.197 03:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.197 03:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.197 03:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:29.197 03:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.197 03:12:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.197 03:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.197 03:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.197 03:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.197 03:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.197 03:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.197 03:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.197 03:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.197 03:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.197 "name": "raid_bdev1", 00:12:29.197 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:29.197 "strip_size_kb": 0, 00:12:29.197 "state": "online", 00:12:29.197 "raid_level": "raid1", 00:12:29.197 "superblock": true, 00:12:29.197 "num_base_bdevs": 2, 00:12:29.197 "num_base_bdevs_discovered": 1, 00:12:29.197 "num_base_bdevs_operational": 1, 00:12:29.197 "base_bdevs_list": [ 00:12:29.197 { 00:12:29.197 "name": null, 00:12:29.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.197 "is_configured": false, 00:12:29.197 "data_offset": 0, 00:12:29.197 "data_size": 63488 00:12:29.197 }, 00:12:29.197 { 00:12:29.197 "name": "BaseBdev2", 00:12:29.197 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:29.197 "is_configured": true, 00:12:29.197 "data_offset": 2048, 00:12:29.197 "data_size": 63488 00:12:29.197 } 00:12:29.197 ] 00:12:29.197 }' 00:12:29.197 03:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.197 03:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.770 "name": "raid_bdev1", 00:12:29.770 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:29.770 "strip_size_kb": 0, 00:12:29.770 "state": "online", 00:12:29.770 "raid_level": "raid1", 00:12:29.770 "superblock": true, 00:12:29.770 "num_base_bdevs": 2, 00:12:29.770 "num_base_bdevs_discovered": 1, 00:12:29.770 "num_base_bdevs_operational": 1, 00:12:29.770 "base_bdevs_list": [ 00:12:29.770 { 00:12:29.770 "name": null, 00:12:29.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.770 "is_configured": false, 00:12:29.770 "data_offset": 0, 00:12:29.770 "data_size": 63488 00:12:29.770 }, 00:12:29.770 { 00:12:29.770 "name": "BaseBdev2", 00:12:29.770 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:29.770 "is_configured": true, 
00:12:29.770 "data_offset": 2048, 00:12:29.770 "data_size": 63488 00:12:29.770 } 00:12:29.770 ] 00:12:29.770 }' 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.770 [2024-11-18 03:12:33.277177] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.770 [2024-11-18 03:12:33.277384] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:29.770 [2024-11-18 03:12:33.277440] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:29.770 request: 00:12:29.770 { 00:12:29.770 "base_bdev": "BaseBdev1", 00:12:29.770 "raid_bdev": "raid_bdev1", 00:12:29.770 "method": "bdev_raid_add_base_bdev", 00:12:29.770 "req_id": 1 00:12:29.770 } 00:12:29.770 Got JSON-RPC error response 00:12:29.770 response: 00:12:29.770 { 00:12:29.770 "code": -22, 00:12:29.770 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:29.770 } 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:29.770 03:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.152 "name": "raid_bdev1", 00:12:31.152 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:31.152 "strip_size_kb": 0, 00:12:31.152 "state": "online", 00:12:31.152 "raid_level": "raid1", 00:12:31.152 "superblock": true, 00:12:31.152 "num_base_bdevs": 2, 00:12:31.152 "num_base_bdevs_discovered": 1, 00:12:31.152 "num_base_bdevs_operational": 1, 00:12:31.152 "base_bdevs_list": [ 00:12:31.152 { 00:12:31.152 "name": null, 00:12:31.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.152 "is_configured": false, 00:12:31.152 "data_offset": 0, 00:12:31.152 "data_size": 63488 00:12:31.152 }, 00:12:31.152 { 00:12:31.152 "name": "BaseBdev2", 00:12:31.152 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:31.152 "is_configured": true, 00:12:31.152 "data_offset": 2048, 00:12:31.152 "data_size": 63488 00:12:31.152 } 00:12:31.152 ] 00:12:31.152 }' 
00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.152 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.413 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.413 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.413 "name": "raid_bdev1", 00:12:31.413 "uuid": "f7dc2a69-8f37-40e3-9a4a-ac42e685d04f", 00:12:31.413 "strip_size_kb": 0, 00:12:31.413 "state": "online", 00:12:31.413 "raid_level": "raid1", 00:12:31.413 "superblock": true, 00:12:31.413 "num_base_bdevs": 2, 00:12:31.413 "num_base_bdevs_discovered": 1, 00:12:31.413 "num_base_bdevs_operational": 1, 00:12:31.413 "base_bdevs_list": [ 00:12:31.413 { 00:12:31.413 "name": null, 00:12:31.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.413 "is_configured": false, 00:12:31.413 "data_offset": 0, 
00:12:31.413 "data_size": 63488 00:12:31.413 }, 00:12:31.413 { 00:12:31.413 "name": "BaseBdev2", 00:12:31.413 "uuid": "f2787808-02c3-5c7e-8307-77bb314d5329", 00:12:31.413 "is_configured": true, 00:12:31.413 "data_offset": 2048, 00:12:31.413 "data_size": 63488 00:12:31.413 } 00:12:31.413 ] 00:12:31.413 }' 00:12:31.413 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.413 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:31.413 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.413 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:31.413 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87650 00:12:31.413 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 87650 ']' 00:12:31.413 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 87650 00:12:31.413 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:12:31.413 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:31.413 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87650 00:12:31.413 killing process with pid 87650 00:12:31.413 Received shutdown signal, test time was about 16.674904 seconds 00:12:31.413 00:12:31.413 Latency(us) 00:12:31.413 [2024-11-18T03:12:34.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.413 [2024-11-18T03:12:34.990Z] =================================================================================================================== 00:12:31.413 [2024-11-18T03:12:34.990Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:31.413 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:31.413 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:31.413 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87650' 00:12:31.413 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 87650 00:12:31.413 [2024-11-18 03:12:34.887817] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:31.413 [2024-11-18 03:12:34.887955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.413 03:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 87650 00:12:31.413 [2024-11-18 03:12:34.888024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:31.413 [2024-11-18 03:12:34.888033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:31.413 [2024-11-18 03:12:34.914991] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:31.674 ************************************ 00:12:31.674 END TEST raid_rebuild_test_sb_io 00:12:31.674 ************************************ 00:12:31.674 00:12:31.674 real 0m18.645s 00:12:31.674 user 0m24.816s 00:12:31.674 sys 0m2.104s 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.674 03:12:35 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:31.674 03:12:35 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:31.674 03:12:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 
00:12:31.674 03:12:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.674 03:12:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:31.674 ************************************ 00:12:31.674 START TEST raid_rebuild_test 00:12:31.674 ************************************ 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=88322 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 88322 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 88322 ']' 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.674 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:31.674 03:12:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.934 [2024-11-18 03:12:35.312341] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:31.935 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:31.935 Zero copy mechanism will not be used. 00:12:31.935 [2024-11-18 03:12:35.312562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88322 ] 00:12:31.935 [2024-11-18 03:12:35.471823] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.195 [2024-11-18 03:12:35.522815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.195 [2024-11-18 03:12:35.565114] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.195 [2024-11-18 03:12:35.565152] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.765 BaseBdev1_malloc 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.765 [2024-11-18 03:12:36.171432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:32.765 [2024-11-18 03:12:36.171495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.765 [2024-11-18 03:12:36.171537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:32.765 [2024-11-18 03:12:36.171551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.765 [2024-11-18 03:12:36.173827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.765 [2024-11-18 03:12:36.173869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:32.765 BaseBdev1 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:12:32.765 BaseBdev2_malloc 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.765 [2024-11-18 03:12:36.210987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:32.765 [2024-11-18 03:12:36.211053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.765 [2024-11-18 03:12:36.211080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:32.765 [2024-11-18 03:12:36.211091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.765 [2024-11-18 03:12:36.213754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.765 [2024-11-18 03:12:36.213798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:32.765 BaseBdev2 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.765 BaseBdev3_malloc 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.765 [2024-11-18 03:12:36.239660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:32.765 [2024-11-18 03:12:36.239716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.765 [2024-11-18 03:12:36.239758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:32.765 [2024-11-18 03:12:36.239768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.765 [2024-11-18 03:12:36.241848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.765 [2024-11-18 03:12:36.241885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:32.765 BaseBdev3 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.765 BaseBdev4_malloc 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.765 [2024-11-18 03:12:36.268339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:32.765 [2024-11-18 03:12:36.268401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.765 [2024-11-18 03:12:36.268427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:32.765 [2024-11-18 03:12:36.268436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.765 [2024-11-18 03:12:36.270719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.765 [2024-11-18 03:12:36.270759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:32.765 BaseBdev4 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.765 spare_malloc 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.765 spare_delay 00:12:32.765 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:32.766 
03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.766 [2024-11-18 03:12:36.309262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:32.766 [2024-11-18 03:12:36.309370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.766 [2024-11-18 03:12:36.309399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:32.766 [2024-11-18 03:12:36.309408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.766 [2024-11-18 03:12:36.311699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.766 [2024-11-18 03:12:36.311738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:32.766 spare 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.766 [2024-11-18 03:12:36.321319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.766 [2024-11-18 03:12:36.323156] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.766 [2024-11-18 03:12:36.323228] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:32.766 [2024-11-18 03:12:36.323272] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:32.766 [2024-11-18 03:12:36.323353] bdev_raid.c:1730:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000006280 00:12:32.766 [2024-11-18 03:12:36.323367] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:32.766 [2024-11-18 03:12:36.323630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:32.766 [2024-11-18 03:12:36.323771] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:32.766 [2024-11-18 03:12:36.323784] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:32.766 [2024-11-18 03:12:36.323905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.766 03:12:36 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.766 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.026 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.026 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.026 "name": "raid_bdev1", 00:12:33.026 "uuid": "6fa80a55-584b-465b-808f-5eb538bd03ae", 00:12:33.026 "strip_size_kb": 0, 00:12:33.026 "state": "online", 00:12:33.026 "raid_level": "raid1", 00:12:33.026 "superblock": false, 00:12:33.026 "num_base_bdevs": 4, 00:12:33.026 "num_base_bdevs_discovered": 4, 00:12:33.026 "num_base_bdevs_operational": 4, 00:12:33.026 "base_bdevs_list": [ 00:12:33.026 { 00:12:33.026 "name": "BaseBdev1", 00:12:33.026 "uuid": "644fa0c8-c046-5712-ba7f-596711b4d54d", 00:12:33.026 "is_configured": true, 00:12:33.026 "data_offset": 0, 00:12:33.026 "data_size": 65536 00:12:33.026 }, 00:12:33.026 { 00:12:33.026 "name": "BaseBdev2", 00:12:33.026 "uuid": "c4ff3f86-c3ea-5868-8efe-fe6b609cc9ce", 00:12:33.026 "is_configured": true, 00:12:33.026 "data_offset": 0, 00:12:33.026 "data_size": 65536 00:12:33.026 }, 00:12:33.026 { 00:12:33.026 "name": "BaseBdev3", 00:12:33.026 "uuid": "07b85d3f-5e8c-5789-b5c3-798750d696d0", 00:12:33.026 "is_configured": true, 00:12:33.026 "data_offset": 0, 00:12:33.026 "data_size": 65536 00:12:33.026 }, 00:12:33.026 { 00:12:33.026 "name": "BaseBdev4", 00:12:33.026 "uuid": "359ff2c1-3035-5fb8-9dc3-60e01df1bd14", 00:12:33.026 "is_configured": true, 00:12:33.026 "data_offset": 0, 00:12:33.026 "data_size": 65536 00:12:33.026 } 00:12:33.026 ] 00:12:33.026 }' 00:12:33.026 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.026 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:33.286 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:33.286 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:33.286 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.286 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.286 [2024-11-18 03:12:36.776850] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:33.286 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.286 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:33.286 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.286 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.286 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.286 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:33.286 03:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.546 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:33.546 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:33.546 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:33.546 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:33.546 03:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:33.546 03:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.546 03:12:36 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:33.546 03:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:33.546 03:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:33.546 03:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:33.546 03:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:33.546 03:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:33.546 03:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.546 03:12:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:33.546 [2024-11-18 03:12:37.056123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:33.546 /dev/nbd0 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:33.546 03:12:37 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.546 1+0 records in 00:12:33.546 1+0 records out 00:12:33.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372801 s, 11.0 MB/s 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:33.546 03:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:38.837 65536+0 records in 00:12:38.837 65536+0 records out 00:12:38.837 33554432 bytes (34 MB, 32 MiB) copied, 5.15256 s, 6.5 MB/s 00:12:38.837 03:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:38.837 03:12:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:38.837 03:12:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:38.837 03:12:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:38.837 
03:12:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:38.837 03:12:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.837 03:12:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:39.098 [2024-11-18 03:12:42.493033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.098 [2024-11-18 03:12:42.506106] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.098 03:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.098 "name": "raid_bdev1", 00:12:39.098 "uuid": "6fa80a55-584b-465b-808f-5eb538bd03ae", 00:12:39.098 "strip_size_kb": 0, 00:12:39.098 "state": "online", 00:12:39.098 "raid_level": "raid1", 00:12:39.098 "superblock": false, 00:12:39.098 "num_base_bdevs": 4, 00:12:39.098 "num_base_bdevs_discovered": 3, 00:12:39.098 "num_base_bdevs_operational": 3, 00:12:39.098 "base_bdevs_list": [ 00:12:39.098 { 00:12:39.098 "name": null, 00:12:39.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.098 
"is_configured": false, 00:12:39.098 "data_offset": 0, 00:12:39.098 "data_size": 65536 00:12:39.098 }, 00:12:39.098 { 00:12:39.098 "name": "BaseBdev2", 00:12:39.098 "uuid": "c4ff3f86-c3ea-5868-8efe-fe6b609cc9ce", 00:12:39.098 "is_configured": true, 00:12:39.098 "data_offset": 0, 00:12:39.098 "data_size": 65536 00:12:39.098 }, 00:12:39.098 { 00:12:39.098 "name": "BaseBdev3", 00:12:39.099 "uuid": "07b85d3f-5e8c-5789-b5c3-798750d696d0", 00:12:39.099 "is_configured": true, 00:12:39.099 "data_offset": 0, 00:12:39.099 "data_size": 65536 00:12:39.099 }, 00:12:39.099 { 00:12:39.099 "name": "BaseBdev4", 00:12:39.099 "uuid": "359ff2c1-3035-5fb8-9dc3-60e01df1bd14", 00:12:39.099 "is_configured": true, 00:12:39.099 "data_offset": 0, 00:12:39.099 "data_size": 65536 00:12:39.099 } 00:12:39.099 ] 00:12:39.099 }' 00:12:39.099 03:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.099 03:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.359 03:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:39.359 03:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.359 03:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.359 [2024-11-18 03:12:42.869506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:39.359 [2024-11-18 03:12:42.872919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:39.359 [2024-11-18 03:12:42.874931] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:39.359 03:12:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.359 03:12:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:40.741 03:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.741 03:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.741 03:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.741 03:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.741 03:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.741 03:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.741 03:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.741 03:12:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.741 03:12:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.741 03:12:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.741 03:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.741 "name": "raid_bdev1", 00:12:40.741 "uuid": "6fa80a55-584b-465b-808f-5eb538bd03ae", 00:12:40.741 "strip_size_kb": 0, 00:12:40.741 "state": "online", 00:12:40.741 "raid_level": "raid1", 00:12:40.741 "superblock": false, 00:12:40.741 "num_base_bdevs": 4, 00:12:40.741 "num_base_bdevs_discovered": 4, 00:12:40.741 "num_base_bdevs_operational": 4, 00:12:40.741 "process": { 00:12:40.741 "type": "rebuild", 00:12:40.741 "target": "spare", 00:12:40.742 "progress": { 00:12:40.742 "blocks": 20480, 00:12:40.742 "percent": 31 00:12:40.742 } 00:12:40.742 }, 00:12:40.742 "base_bdevs_list": [ 00:12:40.742 { 00:12:40.742 "name": "spare", 00:12:40.742 "uuid": "cf686697-d7af-52a5-a6c4-b1e65b65d2de", 00:12:40.742 "is_configured": true, 00:12:40.742 "data_offset": 0, 00:12:40.742 "data_size": 65536 00:12:40.742 }, 00:12:40.742 { 00:12:40.742 "name": "BaseBdev2", 00:12:40.742 "uuid": 
"c4ff3f86-c3ea-5868-8efe-fe6b609cc9ce", 00:12:40.742 "is_configured": true, 00:12:40.742 "data_offset": 0, 00:12:40.742 "data_size": 65536 00:12:40.742 }, 00:12:40.742 { 00:12:40.742 "name": "BaseBdev3", 00:12:40.742 "uuid": "07b85d3f-5e8c-5789-b5c3-798750d696d0", 00:12:40.742 "is_configured": true, 00:12:40.742 "data_offset": 0, 00:12:40.742 "data_size": 65536 00:12:40.742 }, 00:12:40.742 { 00:12:40.742 "name": "BaseBdev4", 00:12:40.742 "uuid": "359ff2c1-3035-5fb8-9dc3-60e01df1bd14", 00:12:40.742 "is_configured": true, 00:12:40.742 "data_offset": 0, 00:12:40.742 "data_size": 65536 00:12:40.742 } 00:12:40.742 ] 00:12:40.742 }' 00:12:40.742 03:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.742 03:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.742 03:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.742 [2024-11-18 03:12:44.041854] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.742 [2024-11-18 03:12:44.079884] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:40.742 [2024-11-18 03:12:44.080017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.742 [2024-11-18 03:12:44.080061] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.742 [2024-11-18 03:12:44.080099] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.742 "name": "raid_bdev1", 00:12:40.742 "uuid": "6fa80a55-584b-465b-808f-5eb538bd03ae", 00:12:40.742 "strip_size_kb": 0, 00:12:40.742 "state": "online", 
00:12:40.742 "raid_level": "raid1", 00:12:40.742 "superblock": false, 00:12:40.742 "num_base_bdevs": 4, 00:12:40.742 "num_base_bdevs_discovered": 3, 00:12:40.742 "num_base_bdevs_operational": 3, 00:12:40.742 "base_bdevs_list": [ 00:12:40.742 { 00:12:40.742 "name": null, 00:12:40.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.742 "is_configured": false, 00:12:40.742 "data_offset": 0, 00:12:40.742 "data_size": 65536 00:12:40.742 }, 00:12:40.742 { 00:12:40.742 "name": "BaseBdev2", 00:12:40.742 "uuid": "c4ff3f86-c3ea-5868-8efe-fe6b609cc9ce", 00:12:40.742 "is_configured": true, 00:12:40.742 "data_offset": 0, 00:12:40.742 "data_size": 65536 00:12:40.742 }, 00:12:40.742 { 00:12:40.742 "name": "BaseBdev3", 00:12:40.742 "uuid": "07b85d3f-5e8c-5789-b5c3-798750d696d0", 00:12:40.742 "is_configured": true, 00:12:40.742 "data_offset": 0, 00:12:40.742 "data_size": 65536 00:12:40.742 }, 00:12:40.742 { 00:12:40.742 "name": "BaseBdev4", 00:12:40.742 "uuid": "359ff2c1-3035-5fb8-9dc3-60e01df1bd14", 00:12:40.742 "is_configured": true, 00:12:40.742 "data_offset": 0, 00:12:40.742 "data_size": 65536 00:12:40.742 } 00:12:40.742 ] 00:12:40.742 }' 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.742 03:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.002 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:41.002 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.002 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:41.002 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:41.002 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.002 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:41.002 03:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.002 03:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.002 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.002 03:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.002 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.002 "name": "raid_bdev1", 00:12:41.002 "uuid": "6fa80a55-584b-465b-808f-5eb538bd03ae", 00:12:41.002 "strip_size_kb": 0, 00:12:41.002 "state": "online", 00:12:41.002 "raid_level": "raid1", 00:12:41.002 "superblock": false, 00:12:41.002 "num_base_bdevs": 4, 00:12:41.002 "num_base_bdevs_discovered": 3, 00:12:41.002 "num_base_bdevs_operational": 3, 00:12:41.002 "base_bdevs_list": [ 00:12:41.002 { 00:12:41.002 "name": null, 00:12:41.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.002 "is_configured": false, 00:12:41.002 "data_offset": 0, 00:12:41.002 "data_size": 65536 00:12:41.002 }, 00:12:41.002 { 00:12:41.002 "name": "BaseBdev2", 00:12:41.002 "uuid": "c4ff3f86-c3ea-5868-8efe-fe6b609cc9ce", 00:12:41.002 "is_configured": true, 00:12:41.002 "data_offset": 0, 00:12:41.003 "data_size": 65536 00:12:41.003 }, 00:12:41.003 { 00:12:41.003 "name": "BaseBdev3", 00:12:41.003 "uuid": "07b85d3f-5e8c-5789-b5c3-798750d696d0", 00:12:41.003 "is_configured": true, 00:12:41.003 "data_offset": 0, 00:12:41.003 "data_size": 65536 00:12:41.003 }, 00:12:41.003 { 00:12:41.003 "name": "BaseBdev4", 00:12:41.003 "uuid": "359ff2c1-3035-5fb8-9dc3-60e01df1bd14", 00:12:41.003 "is_configured": true, 00:12:41.003 "data_offset": 0, 00:12:41.003 "data_size": 65536 00:12:41.003 } 00:12:41.003 ] 00:12:41.003 }' 00:12:41.003 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.263 03:12:44 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:41.263 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.263 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:41.263 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:41.263 03:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.263 03:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.263 [2024-11-18 03:12:44.635363] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:41.263 [2024-11-18 03:12:44.638677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:41.263 [2024-11-18 03:12:44.640771] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:41.263 03:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.263 03:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:42.203 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.203 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.203 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.203 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.203 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.203 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.203 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.203 03:12:45 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.203 03:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.203 03:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.203 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.203 "name": "raid_bdev1", 00:12:42.203 "uuid": "6fa80a55-584b-465b-808f-5eb538bd03ae", 00:12:42.203 "strip_size_kb": 0, 00:12:42.203 "state": "online", 00:12:42.203 "raid_level": "raid1", 00:12:42.203 "superblock": false, 00:12:42.203 "num_base_bdevs": 4, 00:12:42.203 "num_base_bdevs_discovered": 4, 00:12:42.203 "num_base_bdevs_operational": 4, 00:12:42.203 "process": { 00:12:42.203 "type": "rebuild", 00:12:42.203 "target": "spare", 00:12:42.203 "progress": { 00:12:42.203 "blocks": 20480, 00:12:42.203 "percent": 31 00:12:42.203 } 00:12:42.203 }, 00:12:42.204 "base_bdevs_list": [ 00:12:42.204 { 00:12:42.204 "name": "spare", 00:12:42.204 "uuid": "cf686697-d7af-52a5-a6c4-b1e65b65d2de", 00:12:42.204 "is_configured": true, 00:12:42.204 "data_offset": 0, 00:12:42.204 "data_size": 65536 00:12:42.204 }, 00:12:42.204 { 00:12:42.204 "name": "BaseBdev2", 00:12:42.204 "uuid": "c4ff3f86-c3ea-5868-8efe-fe6b609cc9ce", 00:12:42.204 "is_configured": true, 00:12:42.204 "data_offset": 0, 00:12:42.204 "data_size": 65536 00:12:42.204 }, 00:12:42.204 { 00:12:42.204 "name": "BaseBdev3", 00:12:42.204 "uuid": "07b85d3f-5e8c-5789-b5c3-798750d696d0", 00:12:42.204 "is_configured": true, 00:12:42.204 "data_offset": 0, 00:12:42.204 "data_size": 65536 00:12:42.204 }, 00:12:42.204 { 00:12:42.204 "name": "BaseBdev4", 00:12:42.204 "uuid": "359ff2c1-3035-5fb8-9dc3-60e01df1bd14", 00:12:42.204 "is_configured": true, 00:12:42.204 "data_offset": 0, 00:12:42.204 "data_size": 65536 00:12:42.204 } 00:12:42.204 ] 00:12:42.204 }' 00:12:42.204 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:12:42.204 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.204 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.464 [2024-11-18 03:12:45.807711] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:42.464 [2024-11-18 03:12:45.845060] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09ca0 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.464 
03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.464 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.464 "name": "raid_bdev1", 00:12:42.464 "uuid": "6fa80a55-584b-465b-808f-5eb538bd03ae", 00:12:42.464 "strip_size_kb": 0, 00:12:42.464 "state": "online", 00:12:42.464 "raid_level": "raid1", 00:12:42.464 "superblock": false, 00:12:42.464 "num_base_bdevs": 4, 00:12:42.464 "num_base_bdevs_discovered": 3, 00:12:42.464 "num_base_bdevs_operational": 3, 00:12:42.464 "process": { 00:12:42.464 "type": "rebuild", 00:12:42.464 "target": "spare", 00:12:42.464 "progress": { 00:12:42.464 "blocks": 24576, 00:12:42.464 "percent": 37 00:12:42.464 } 00:12:42.464 }, 00:12:42.464 "base_bdevs_list": [ 00:12:42.464 { 00:12:42.464 "name": "spare", 00:12:42.464 "uuid": "cf686697-d7af-52a5-a6c4-b1e65b65d2de", 00:12:42.464 "is_configured": true, 00:12:42.464 "data_offset": 0, 00:12:42.464 "data_size": 65536 00:12:42.464 }, 00:12:42.464 { 00:12:42.464 "name": null, 00:12:42.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.464 "is_configured": false, 00:12:42.464 "data_offset": 0, 00:12:42.464 "data_size": 65536 00:12:42.464 }, 00:12:42.464 { 00:12:42.464 "name": "BaseBdev3", 00:12:42.464 "uuid": "07b85d3f-5e8c-5789-b5c3-798750d696d0", 00:12:42.464 "is_configured": true, 
00:12:42.464 "data_offset": 0, 00:12:42.465 "data_size": 65536 00:12:42.465 }, 00:12:42.465 { 00:12:42.465 "name": "BaseBdev4", 00:12:42.465 "uuid": "359ff2c1-3035-5fb8-9dc3-60e01df1bd14", 00:12:42.465 "is_configured": true, 00:12:42.465 "data_offset": 0, 00:12:42.465 "data_size": 65536 00:12:42.465 } 00:12:42.465 ] 00:12:42.465 }' 00:12:42.465 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.465 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.465 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.465 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.465 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=359 00:12:42.465 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:42.465 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.465 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.465 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.465 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.465 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.465 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.465 03:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.465 03:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.465 03:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.465 03:12:46 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.465 03:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.465 "name": "raid_bdev1", 00:12:42.465 "uuid": "6fa80a55-584b-465b-808f-5eb538bd03ae", 00:12:42.465 "strip_size_kb": 0, 00:12:42.465 "state": "online", 00:12:42.465 "raid_level": "raid1", 00:12:42.465 "superblock": false, 00:12:42.465 "num_base_bdevs": 4, 00:12:42.465 "num_base_bdevs_discovered": 3, 00:12:42.465 "num_base_bdevs_operational": 3, 00:12:42.465 "process": { 00:12:42.465 "type": "rebuild", 00:12:42.465 "target": "spare", 00:12:42.465 "progress": { 00:12:42.465 "blocks": 26624, 00:12:42.465 "percent": 40 00:12:42.465 } 00:12:42.465 }, 00:12:42.465 "base_bdevs_list": [ 00:12:42.465 { 00:12:42.465 "name": "spare", 00:12:42.465 "uuid": "cf686697-d7af-52a5-a6c4-b1e65b65d2de", 00:12:42.465 "is_configured": true, 00:12:42.465 "data_offset": 0, 00:12:42.465 "data_size": 65536 00:12:42.465 }, 00:12:42.465 { 00:12:42.465 "name": null, 00:12:42.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.465 "is_configured": false, 00:12:42.465 "data_offset": 0, 00:12:42.465 "data_size": 65536 00:12:42.465 }, 00:12:42.465 { 00:12:42.465 "name": "BaseBdev3", 00:12:42.465 "uuid": "07b85d3f-5e8c-5789-b5c3-798750d696d0", 00:12:42.465 "is_configured": true, 00:12:42.465 "data_offset": 0, 00:12:42.465 "data_size": 65536 00:12:42.465 }, 00:12:42.465 { 00:12:42.465 "name": "BaseBdev4", 00:12:42.465 "uuid": "359ff2c1-3035-5fb8-9dc3-60e01df1bd14", 00:12:42.465 "is_configured": true, 00:12:42.465 "data_offset": 0, 00:12:42.465 "data_size": 65536 00:12:42.465 } 00:12:42.465 ] 00:12:42.465 }' 00:12:42.465 03:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.725 03:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.725 03:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:12:42.725 03:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.725 03:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:43.666 03:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:43.666 03:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.666 03:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.666 03:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.666 03:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.666 03:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.666 03:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.666 03:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.666 03:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.666 03:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.666 03:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.666 03:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.666 "name": "raid_bdev1", 00:12:43.666 "uuid": "6fa80a55-584b-465b-808f-5eb538bd03ae", 00:12:43.666 "strip_size_kb": 0, 00:12:43.666 "state": "online", 00:12:43.666 "raid_level": "raid1", 00:12:43.666 "superblock": false, 00:12:43.666 "num_base_bdevs": 4, 00:12:43.666 "num_base_bdevs_discovered": 3, 00:12:43.666 "num_base_bdevs_operational": 3, 00:12:43.666 "process": { 00:12:43.666 "type": "rebuild", 00:12:43.666 "target": "spare", 00:12:43.666 "progress": { 00:12:43.666 
"blocks": 49152, 00:12:43.666 "percent": 75 00:12:43.666 } 00:12:43.666 }, 00:12:43.666 "base_bdevs_list": [ 00:12:43.666 { 00:12:43.666 "name": "spare", 00:12:43.666 "uuid": "cf686697-d7af-52a5-a6c4-b1e65b65d2de", 00:12:43.666 "is_configured": true, 00:12:43.666 "data_offset": 0, 00:12:43.666 "data_size": 65536 00:12:43.666 }, 00:12:43.666 { 00:12:43.666 "name": null, 00:12:43.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.666 "is_configured": false, 00:12:43.666 "data_offset": 0, 00:12:43.666 "data_size": 65536 00:12:43.666 }, 00:12:43.666 { 00:12:43.666 "name": "BaseBdev3", 00:12:43.666 "uuid": "07b85d3f-5e8c-5789-b5c3-798750d696d0", 00:12:43.666 "is_configured": true, 00:12:43.666 "data_offset": 0, 00:12:43.666 "data_size": 65536 00:12:43.666 }, 00:12:43.666 { 00:12:43.666 "name": "BaseBdev4", 00:12:43.666 "uuid": "359ff2c1-3035-5fb8-9dc3-60e01df1bd14", 00:12:43.666 "is_configured": true, 00:12:43.666 "data_offset": 0, 00:12:43.666 "data_size": 65536 00:12:43.666 } 00:12:43.666 ] 00:12:43.666 }' 00:12:43.666 03:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.666 03:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:43.666 03:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.926 03:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:43.926 03:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:44.496 [2024-11-18 03:12:47.852661] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:44.496 [2024-11-18 03:12:47.852803] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:44.496 [2024-11-18 03:12:47.852886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.757 03:12:48 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:44.757 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.757 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.757 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.757 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.757 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.757 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.757 03:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.757 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.757 03:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.757 03:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.757 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.757 "name": "raid_bdev1", 00:12:44.757 "uuid": "6fa80a55-584b-465b-808f-5eb538bd03ae", 00:12:44.757 "strip_size_kb": 0, 00:12:44.757 "state": "online", 00:12:44.757 "raid_level": "raid1", 00:12:44.757 "superblock": false, 00:12:44.757 "num_base_bdevs": 4, 00:12:44.757 "num_base_bdevs_discovered": 3, 00:12:44.757 "num_base_bdevs_operational": 3, 00:12:44.757 "base_bdevs_list": [ 00:12:44.757 { 00:12:44.757 "name": "spare", 00:12:44.757 "uuid": "cf686697-d7af-52a5-a6c4-b1e65b65d2de", 00:12:44.757 "is_configured": true, 00:12:44.757 "data_offset": 0, 00:12:44.757 "data_size": 65536 00:12:44.757 }, 00:12:44.757 { 00:12:44.757 "name": null, 00:12:44.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.757 "is_configured": false, 00:12:44.757 
"data_offset": 0, 00:12:44.757 "data_size": 65536 00:12:44.757 }, 00:12:44.757 { 00:12:44.757 "name": "BaseBdev3", 00:12:44.757 "uuid": "07b85d3f-5e8c-5789-b5c3-798750d696d0", 00:12:44.757 "is_configured": true, 00:12:44.757 "data_offset": 0, 00:12:44.757 "data_size": 65536 00:12:44.757 }, 00:12:44.757 { 00:12:44.757 "name": "BaseBdev4", 00:12:44.757 "uuid": "359ff2c1-3035-5fb8-9dc3-60e01df1bd14", 00:12:44.757 "is_configured": true, 00:12:44.757 "data_offset": 0, 00:12:44.757 "data_size": 65536 00:12:44.757 } 00:12:44.757 ] 00:12:44.757 }' 00:12:44.757 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.018 03:12:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.018 "name": "raid_bdev1", 00:12:45.018 "uuid": "6fa80a55-584b-465b-808f-5eb538bd03ae", 00:12:45.018 "strip_size_kb": 0, 00:12:45.018 "state": "online", 00:12:45.018 "raid_level": "raid1", 00:12:45.018 "superblock": false, 00:12:45.018 "num_base_bdevs": 4, 00:12:45.018 "num_base_bdevs_discovered": 3, 00:12:45.018 "num_base_bdevs_operational": 3, 00:12:45.018 "base_bdevs_list": [ 00:12:45.018 { 00:12:45.018 "name": "spare", 00:12:45.018 "uuid": "cf686697-d7af-52a5-a6c4-b1e65b65d2de", 00:12:45.018 "is_configured": true, 00:12:45.018 "data_offset": 0, 00:12:45.018 "data_size": 65536 00:12:45.018 }, 00:12:45.018 { 00:12:45.018 "name": null, 00:12:45.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.018 "is_configured": false, 00:12:45.018 "data_offset": 0, 00:12:45.018 "data_size": 65536 00:12:45.018 }, 00:12:45.018 { 00:12:45.018 "name": "BaseBdev3", 00:12:45.018 "uuid": "07b85d3f-5e8c-5789-b5c3-798750d696d0", 00:12:45.018 "is_configured": true, 00:12:45.018 "data_offset": 0, 00:12:45.018 "data_size": 65536 00:12:45.018 }, 00:12:45.018 { 00:12:45.018 "name": "BaseBdev4", 00:12:45.018 "uuid": "359ff2c1-3035-5fb8-9dc3-60e01df1bd14", 00:12:45.018 "is_configured": true, 00:12:45.018 "data_offset": 0, 00:12:45.018 "data_size": 65536 00:12:45.018 } 00:12:45.018 ] 00:12:45.018 }' 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.018 03:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.278 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.278 "name": "raid_bdev1", 00:12:45.278 "uuid": "6fa80a55-584b-465b-808f-5eb538bd03ae", 00:12:45.278 "strip_size_kb": 0, 00:12:45.278 "state": "online", 00:12:45.278 "raid_level": "raid1", 00:12:45.278 "superblock": false, 00:12:45.278 "num_base_bdevs": 4, 00:12:45.278 
"num_base_bdevs_discovered": 3, 00:12:45.278 "num_base_bdevs_operational": 3, 00:12:45.278 "base_bdevs_list": [ 00:12:45.278 { 00:12:45.278 "name": "spare", 00:12:45.278 "uuid": "cf686697-d7af-52a5-a6c4-b1e65b65d2de", 00:12:45.278 "is_configured": true, 00:12:45.278 "data_offset": 0, 00:12:45.278 "data_size": 65536 00:12:45.278 }, 00:12:45.278 { 00:12:45.278 "name": null, 00:12:45.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.278 "is_configured": false, 00:12:45.278 "data_offset": 0, 00:12:45.278 "data_size": 65536 00:12:45.278 }, 00:12:45.278 { 00:12:45.278 "name": "BaseBdev3", 00:12:45.278 "uuid": "07b85d3f-5e8c-5789-b5c3-798750d696d0", 00:12:45.278 "is_configured": true, 00:12:45.278 "data_offset": 0, 00:12:45.278 "data_size": 65536 00:12:45.278 }, 00:12:45.278 { 00:12:45.278 "name": "BaseBdev4", 00:12:45.279 "uuid": "359ff2c1-3035-5fb8-9dc3-60e01df1bd14", 00:12:45.279 "is_configured": true, 00:12:45.279 "data_offset": 0, 00:12:45.279 "data_size": 65536 00:12:45.279 } 00:12:45.279 ] 00:12:45.279 }' 00:12:45.279 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.279 03:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.539 03:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:45.539 03:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.539 03:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.539 [2024-11-18 03:12:49.002860] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:45.539 [2024-11-18 03:12:49.002968] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.539 [2024-11-18 03:12:49.003067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.539 [2024-11-18 03:12:49.003182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:12:45.539 [2024-11-18 03:12:49.003198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:45.539 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.539 03:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.539 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.539 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.539 03:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:45.539 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.539 03:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:45.539 03:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:45.539 03:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:45.539 03:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:45.539 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:45.539 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:45.539 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:45.539 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:45.539 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:45.539 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:45.539 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:45.539 03:12:49 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:45.539 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:45.797 /dev/nbd0 00:12:45.797 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:45.797 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:45.797 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:45.797 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:45.797 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:45.797 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:45.797 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:45.798 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:45.798 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:45.798 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:45.798 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.798 1+0 records in 00:12:45.798 1+0 records out 00:12:45.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033512 s, 12.2 MB/s 00:12:45.798 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.798 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:45.798 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:12:45.798 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:45.798 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:45.798 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:45.798 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:45.798 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:46.057 /dev/nbd1 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.057 1+0 records in 00:12:46.057 1+0 records out 00:12:46.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250082 s, 16.4 MB/s 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.057 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:46.317 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:46.317 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:46.317 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:46.317 03:12:49 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.317 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.317 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:46.317 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:46.317 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.317 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.317 03:12:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 88322 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 88322 ']' 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 88322 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # 
uname 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88322 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:46.577 killing process with pid 88322 00:12:46.577 Received shutdown signal, test time was about 60.000000 seconds 00:12:46.577 00:12:46.577 Latency(us) 00:12:46.577 [2024-11-18T03:12:50.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:46.577 [2024-11-18T03:12:50.154Z] =================================================================================================================== 00:12:46.577 [2024-11-18T03:12:50.154Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88322' 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 88322 00:12:46.577 [2024-11-18 03:12:50.096199] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:46.577 03:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 88322 00:12:46.577 [2024-11-18 03:12:50.147266] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:46.836 03:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:46.836 00:12:46.836 real 0m15.162s 00:12:46.836 user 0m17.467s 00:12:46.836 sys 0m2.919s 00:12:46.836 03:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:46.836 ************************************ 00:12:46.836 END TEST raid_rebuild_test 00:12:46.836 ************************************ 00:12:46.836 03:12:50 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:47.096 03:12:50 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:12:47.096 03:12:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:47.096 03:12:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:47.096 03:12:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:47.096 ************************************ 00:12:47.096 START TEST raid_rebuild_test_sb 00:12:47.096 ************************************ 00:12:47.096 03:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:12:47.096 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:47.096 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:47.096 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:47.096 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:47.096 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:47.096 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:47.096 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.096 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:47.096 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88748 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88748 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 88748 ']' 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:47.097 03:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.097 [2024-11-18 03:12:50.550515] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:47.097 [2024-11-18 03:12:50.550716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:47.097 Zero copy mechanism will not be used. 
00:12:47.097 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88748 ] 00:12:47.357 [2024-11-18 03:12:50.712002] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.357 [2024-11-18 03:12:50.762510] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.357 [2024-11-18 03:12:50.804994] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.357 [2024-11-18 03:12:50.805106] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.927 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.928 BaseBdev1_malloc 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.928 [2024-11-18 03:12:51.391584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:47.928 [2024-11-18 03:12:51.391720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:47.928 [2024-11-18 03:12:51.391749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:47.928 [2024-11-18 03:12:51.391763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.928 [2024-11-18 03:12:51.393907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.928 [2024-11-18 03:12:51.393945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:47.928 BaseBdev1 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.928 BaseBdev2_malloc 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.928 [2024-11-18 03:12:51.429629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:47.928 [2024-11-18 03:12:51.429739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.928 [2024-11-18 03:12:51.429783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:47.928 [2024-11-18 03:12:51.429816] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.928 [2024-11-18 03:12:51.432251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.928 [2024-11-18 03:12:51.432333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:47.928 BaseBdev2 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.928 BaseBdev3_malloc 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.928 [2024-11-18 03:12:51.458182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:47.928 [2024-11-18 03:12:51.458232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.928 [2024-11-18 03:12:51.458255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:47.928 [2024-11-18 03:12:51.458263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.928 [2024-11-18 03:12:51.460299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:12:47.928 [2024-11-18 03:12:51.460386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:47.928 BaseBdev3 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.928 BaseBdev4_malloc 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.928 [2024-11-18 03:12:51.486731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:47.928 [2024-11-18 03:12:51.486790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.928 [2024-11-18 03:12:51.486816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:47.928 [2024-11-18 03:12:51.486825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.928 [2024-11-18 03:12:51.488886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.928 [2024-11-18 03:12:51.488922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:47.928 BaseBdev4 00:12:47.928 03:12:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.928 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.189 spare_malloc 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.189 spare_delay 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.189 [2024-11-18 03:12:51.527240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:48.189 [2024-11-18 03:12:51.527293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.189 [2024-11-18 03:12:51.527316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:48.189 [2024-11-18 03:12:51.527324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.189 [2024-11-18 03:12:51.529358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:12:48.189 [2024-11-18 03:12:51.529450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:48.189 spare 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.189 [2024-11-18 03:12:51.539296] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.189 [2024-11-18 03:12:51.541086] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:48.189 [2024-11-18 03:12:51.541154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:48.189 [2024-11-18 03:12:51.541197] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:48.189 [2024-11-18 03:12:51.541361] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:48.189 [2024-11-18 03:12:51.541372] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:48.189 [2024-11-18 03:12:51.541635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:48.189 [2024-11-18 03:12:51.541772] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:48.189 [2024-11-18 03:12:51.541785] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:48.189 [2024-11-18 03:12:51.541914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.189 "name": "raid_bdev1", 00:12:48.189 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:12:48.189 "strip_size_kb": 0, 00:12:48.189 "state": "online", 00:12:48.189 "raid_level": "raid1", 
00:12:48.189 "superblock": true, 00:12:48.189 "num_base_bdevs": 4, 00:12:48.189 "num_base_bdevs_discovered": 4, 00:12:48.189 "num_base_bdevs_operational": 4, 00:12:48.189 "base_bdevs_list": [ 00:12:48.189 { 00:12:48.189 "name": "BaseBdev1", 00:12:48.189 "uuid": "84554279-cfde-5601-a634-9ece990aebaf", 00:12:48.189 "is_configured": true, 00:12:48.189 "data_offset": 2048, 00:12:48.189 "data_size": 63488 00:12:48.189 }, 00:12:48.189 { 00:12:48.189 "name": "BaseBdev2", 00:12:48.189 "uuid": "408e412d-2b70-529e-b397-cfe2e705d838", 00:12:48.189 "is_configured": true, 00:12:48.189 "data_offset": 2048, 00:12:48.189 "data_size": 63488 00:12:48.189 }, 00:12:48.189 { 00:12:48.189 "name": "BaseBdev3", 00:12:48.189 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:12:48.189 "is_configured": true, 00:12:48.189 "data_offset": 2048, 00:12:48.189 "data_size": 63488 00:12:48.189 }, 00:12:48.189 { 00:12:48.189 "name": "BaseBdev4", 00:12:48.189 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:12:48.189 "is_configured": true, 00:12:48.189 "data_offset": 2048, 00:12:48.189 "data_size": 63488 00:12:48.189 } 00:12:48.189 ] 00:12:48.189 }' 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.189 03:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.449 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:48.449 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:48.449 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.449 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.449 [2024-11-18 03:12:52.014812] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.710 
03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:12:48.710 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:48.710 [2024-11-18 03:12:52.258128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:48.710 /dev/nbd0 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.970 1+0 records in 00:12:48.970 1+0 records out 00:12:48.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324934 s, 12.6 MB/s 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:48.970 03:12:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:48.970 03:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:54.252 63488+0 records in 00:12:54.252 63488+0 records out 00:12:54.252 32505856 bytes (33 MB, 31 MiB) copied, 4.78672 s, 6.8 MB/s 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:54.252 [2024-11-18 03:12:57.328031] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.252 [2024-11-18 03:12:57.349253] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.252 
03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.252 "name": "raid_bdev1", 00:12:54.252 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:12:54.252 "strip_size_kb": 0, 00:12:54.252 "state": "online", 00:12:54.252 "raid_level": "raid1", 00:12:54.252 "superblock": true, 00:12:54.252 "num_base_bdevs": 4, 00:12:54.252 "num_base_bdevs_discovered": 3, 00:12:54.252 "num_base_bdevs_operational": 3, 00:12:54.252 "base_bdevs_list": [ 00:12:54.252 { 00:12:54.252 "name": null, 00:12:54.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.252 "is_configured": false, 00:12:54.252 "data_offset": 0, 00:12:54.252 "data_size": 63488 00:12:54.252 }, 00:12:54.252 { 00:12:54.252 "name": "BaseBdev2", 00:12:54.252 "uuid": "408e412d-2b70-529e-b397-cfe2e705d838", 00:12:54.252 "is_configured": true, 00:12:54.252 "data_offset": 2048, 00:12:54.252 "data_size": 63488 00:12:54.252 }, 00:12:54.252 { 00:12:54.252 "name": "BaseBdev3", 00:12:54.252 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 
00:12:54.252 "is_configured": true, 00:12:54.252 "data_offset": 2048, 00:12:54.252 "data_size": 63488 00:12:54.252 }, 00:12:54.252 { 00:12:54.252 "name": "BaseBdev4", 00:12:54.252 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:12:54.252 "is_configured": true, 00:12:54.252 "data_offset": 2048, 00:12:54.252 "data_size": 63488 00:12:54.252 } 00:12:54.252 ] 00:12:54.252 }' 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.252 03:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.253 [2024-11-18 03:12:57.816485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:54.253 [2024-11-18 03:12:57.819823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:54.253 [2024-11-18 03:12:57.821830] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:54.253 03:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.253 03:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:55.636 03:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.636 03:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.636 03:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.636 03:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.636 03:12:58 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.636 03:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.636 03:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.636 03:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.636 03:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.636 03:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.636 03:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.636 "name": "raid_bdev1", 00:12:55.636 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:12:55.636 "strip_size_kb": 0, 00:12:55.636 "state": "online", 00:12:55.636 "raid_level": "raid1", 00:12:55.636 "superblock": true, 00:12:55.636 "num_base_bdevs": 4, 00:12:55.636 "num_base_bdevs_discovered": 4, 00:12:55.636 "num_base_bdevs_operational": 4, 00:12:55.636 "process": { 00:12:55.636 "type": "rebuild", 00:12:55.636 "target": "spare", 00:12:55.636 "progress": { 00:12:55.636 "blocks": 20480, 00:12:55.636 "percent": 32 00:12:55.636 } 00:12:55.636 }, 00:12:55.636 "base_bdevs_list": [ 00:12:55.636 { 00:12:55.636 "name": "spare", 00:12:55.636 "uuid": "e7ce309e-02e2-542e-b117-45fc00b9d0df", 00:12:55.636 "is_configured": true, 00:12:55.636 "data_offset": 2048, 00:12:55.636 "data_size": 63488 00:12:55.636 }, 00:12:55.636 { 00:12:55.636 "name": "BaseBdev2", 00:12:55.636 "uuid": "408e412d-2b70-529e-b397-cfe2e705d838", 00:12:55.636 "is_configured": true, 00:12:55.636 "data_offset": 2048, 00:12:55.636 "data_size": 63488 00:12:55.636 }, 00:12:55.636 { 00:12:55.636 "name": "BaseBdev3", 00:12:55.636 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:12:55.636 "is_configured": true, 00:12:55.636 "data_offset": 2048, 00:12:55.636 "data_size": 63488 00:12:55.636 }, 00:12:55.636 { 
00:12:55.636 "name": "BaseBdev4", 00:12:55.636 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:12:55.636 "is_configured": true, 00:12:55.636 "data_offset": 2048, 00:12:55.636 "data_size": 63488 00:12:55.636 } 00:12:55.636 ] 00:12:55.636 }' 00:12:55.636 03:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.636 03:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:55.636 03:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.636 03:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.636 03:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:55.636 03:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.636 03:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.636 [2024-11-18 03:12:58.988832] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:55.636 [2024-11-18 03:12:59.026558] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:55.636 [2024-11-18 03:12:59.026683] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.636 [2024-11-18 03:12:59.026722] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:55.636 [2024-11-18 03:12:59.026743] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:55.636 03:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.636 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:55.636 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:55.636 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.636 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.636 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.636 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.636 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.636 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.636 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.636 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.636 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.636 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.636 03:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.636 03:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.636 03:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.636 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.636 "name": "raid_bdev1", 00:12:55.636 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:12:55.636 "strip_size_kb": 0, 00:12:55.636 "state": "online", 00:12:55.636 "raid_level": "raid1", 00:12:55.636 "superblock": true, 00:12:55.636 "num_base_bdevs": 4, 00:12:55.636 "num_base_bdevs_discovered": 3, 00:12:55.636 "num_base_bdevs_operational": 3, 00:12:55.636 "base_bdevs_list": [ 00:12:55.636 { 00:12:55.636 "name": null, 00:12:55.636 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:55.636 "is_configured": false, 00:12:55.636 "data_offset": 0, 00:12:55.636 "data_size": 63488 00:12:55.636 }, 00:12:55.636 { 00:12:55.636 "name": "BaseBdev2", 00:12:55.636 "uuid": "408e412d-2b70-529e-b397-cfe2e705d838", 00:12:55.636 "is_configured": true, 00:12:55.636 "data_offset": 2048, 00:12:55.636 "data_size": 63488 00:12:55.636 }, 00:12:55.636 { 00:12:55.636 "name": "BaseBdev3", 00:12:55.636 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:12:55.637 "is_configured": true, 00:12:55.637 "data_offset": 2048, 00:12:55.637 "data_size": 63488 00:12:55.637 }, 00:12:55.637 { 00:12:55.637 "name": "BaseBdev4", 00:12:55.637 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:12:55.637 "is_configured": true, 00:12:55.637 "data_offset": 2048, 00:12:55.637 "data_size": 63488 00:12:55.637 } 00:12:55.637 ] 00:12:55.637 }' 00:12:55.637 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.637 03:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.205 03:12:59 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.205 "name": "raid_bdev1", 00:12:56.205 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:12:56.205 "strip_size_kb": 0, 00:12:56.205 "state": "online", 00:12:56.205 "raid_level": "raid1", 00:12:56.205 "superblock": true, 00:12:56.205 "num_base_bdevs": 4, 00:12:56.205 "num_base_bdevs_discovered": 3, 00:12:56.205 "num_base_bdevs_operational": 3, 00:12:56.205 "base_bdevs_list": [ 00:12:56.205 { 00:12:56.205 "name": null, 00:12:56.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.205 "is_configured": false, 00:12:56.205 "data_offset": 0, 00:12:56.205 "data_size": 63488 00:12:56.205 }, 00:12:56.205 { 00:12:56.205 "name": "BaseBdev2", 00:12:56.205 "uuid": "408e412d-2b70-529e-b397-cfe2e705d838", 00:12:56.205 "is_configured": true, 00:12:56.205 "data_offset": 2048, 00:12:56.205 "data_size": 63488 00:12:56.205 }, 00:12:56.205 { 00:12:56.205 "name": "BaseBdev3", 00:12:56.205 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:12:56.205 "is_configured": true, 00:12:56.205 "data_offset": 2048, 00:12:56.205 "data_size": 63488 00:12:56.205 }, 00:12:56.205 { 00:12:56.205 "name": "BaseBdev4", 00:12:56.205 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:12:56.205 "is_configured": true, 00:12:56.205 "data_offset": 2048, 00:12:56.205 "data_size": 63488 00:12:56.205 } 00:12:56.205 ] 00:12:56.205 }' 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.205 [2024-11-18 03:12:59.645821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:56.205 [2024-11-18 03:12:59.649197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:56.205 [2024-11-18 03:12:59.651126] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.205 03:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:57.145 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.145 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.145 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.145 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.145 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.145 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.145 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.145 03:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.145 03:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.145 03:13:00 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.145 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.145 "name": "raid_bdev1", 00:12:57.145 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:12:57.145 "strip_size_kb": 0, 00:12:57.145 "state": "online", 00:12:57.145 "raid_level": "raid1", 00:12:57.145 "superblock": true, 00:12:57.145 "num_base_bdevs": 4, 00:12:57.145 "num_base_bdevs_discovered": 4, 00:12:57.145 "num_base_bdevs_operational": 4, 00:12:57.145 "process": { 00:12:57.145 "type": "rebuild", 00:12:57.145 "target": "spare", 00:12:57.145 "progress": { 00:12:57.145 "blocks": 20480, 00:12:57.145 "percent": 32 00:12:57.145 } 00:12:57.145 }, 00:12:57.145 "base_bdevs_list": [ 00:12:57.145 { 00:12:57.145 "name": "spare", 00:12:57.145 "uuid": "e7ce309e-02e2-542e-b117-45fc00b9d0df", 00:12:57.145 "is_configured": true, 00:12:57.145 "data_offset": 2048, 00:12:57.145 "data_size": 63488 00:12:57.145 }, 00:12:57.145 { 00:12:57.145 "name": "BaseBdev2", 00:12:57.145 "uuid": "408e412d-2b70-529e-b397-cfe2e705d838", 00:12:57.145 "is_configured": true, 00:12:57.145 "data_offset": 2048, 00:12:57.145 "data_size": 63488 00:12:57.145 }, 00:12:57.145 { 00:12:57.145 "name": "BaseBdev3", 00:12:57.145 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:12:57.145 "is_configured": true, 00:12:57.145 "data_offset": 2048, 00:12:57.145 "data_size": 63488 00:12:57.145 }, 00:12:57.145 { 00:12:57.145 "name": "BaseBdev4", 00:12:57.145 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:12:57.145 "is_configured": true, 00:12:57.145 "data_offset": 2048, 00:12:57.145 "data_size": 63488 00:12:57.145 } 00:12:57.145 ] 00:12:57.145 }' 00:12:57.145 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:57.405 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.405 [2024-11-18 03:13:00.817863] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:57.405 [2024-11-18 03:13:00.955124] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3430 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.405 03:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.666 03:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.666 "name": "raid_bdev1", 00:12:57.666 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:12:57.666 "strip_size_kb": 0, 00:12:57.666 "state": "online", 00:12:57.666 "raid_level": "raid1", 00:12:57.666 "superblock": true, 00:12:57.666 "num_base_bdevs": 4, 00:12:57.666 "num_base_bdevs_discovered": 3, 00:12:57.666 "num_base_bdevs_operational": 3, 00:12:57.666 "process": { 00:12:57.666 "type": "rebuild", 00:12:57.666 "target": "spare", 00:12:57.666 "progress": { 00:12:57.666 "blocks": 24576, 00:12:57.666 "percent": 38 00:12:57.666 } 00:12:57.666 }, 00:12:57.666 "base_bdevs_list": [ 00:12:57.666 { 00:12:57.666 "name": "spare", 00:12:57.666 "uuid": "e7ce309e-02e2-542e-b117-45fc00b9d0df", 00:12:57.666 "is_configured": true, 00:12:57.666 "data_offset": 2048, 00:12:57.666 "data_size": 63488 00:12:57.666 }, 00:12:57.666 { 00:12:57.666 "name": null, 00:12:57.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.666 "is_configured": false, 00:12:57.666 "data_offset": 0, 00:12:57.666 "data_size": 63488 00:12:57.666 }, 00:12:57.666 { 00:12:57.666 "name": "BaseBdev3", 
00:12:57.666 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:12:57.666 "is_configured": true, 00:12:57.666 "data_offset": 2048, 00:12:57.666 "data_size": 63488 00:12:57.666 }, 00:12:57.666 { 00:12:57.666 "name": "BaseBdev4", 00:12:57.666 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:12:57.666 "is_configured": true, 00:12:57.666 "data_offset": 2048, 00:12:57.666 "data_size": 63488 00:12:57.666 } 00:12:57.666 ] 00:12:57.666 }' 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=375 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.666 "name": "raid_bdev1", 00:12:57.666 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:12:57.666 "strip_size_kb": 0, 00:12:57.666 "state": "online", 00:12:57.666 "raid_level": "raid1", 00:12:57.666 "superblock": true, 00:12:57.666 "num_base_bdevs": 4, 00:12:57.666 "num_base_bdevs_discovered": 3, 00:12:57.666 "num_base_bdevs_operational": 3, 00:12:57.666 "process": { 00:12:57.666 "type": "rebuild", 00:12:57.666 "target": "spare", 00:12:57.666 "progress": { 00:12:57.666 "blocks": 26624, 00:12:57.666 "percent": 41 00:12:57.666 } 00:12:57.666 }, 00:12:57.666 "base_bdevs_list": [ 00:12:57.666 { 00:12:57.666 "name": "spare", 00:12:57.666 "uuid": "e7ce309e-02e2-542e-b117-45fc00b9d0df", 00:12:57.666 "is_configured": true, 00:12:57.666 "data_offset": 2048, 00:12:57.666 "data_size": 63488 00:12:57.666 }, 00:12:57.666 { 00:12:57.666 "name": null, 00:12:57.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.666 "is_configured": false, 00:12:57.666 "data_offset": 0, 00:12:57.666 "data_size": 63488 00:12:57.666 }, 00:12:57.666 { 00:12:57.666 "name": "BaseBdev3", 00:12:57.666 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:12:57.666 "is_configured": true, 00:12:57.666 "data_offset": 2048, 00:12:57.666 "data_size": 63488 00:12:57.666 }, 00:12:57.666 { 00:12:57.666 "name": "BaseBdev4", 00:12:57.666 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:12:57.666 "is_configured": true, 00:12:57.666 "data_offset": 2048, 00:12:57.666 "data_size": 63488 00:12:57.666 } 00:12:57.666 ] 00:12:57.666 }' 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.666 03:13:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.666 03:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:59.056 03:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:59.056 03:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.056 03:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.056 03:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.056 03:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.056 03:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.056 03:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.056 03:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.056 03:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.056 03:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.056 03:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.056 03:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.056 "name": "raid_bdev1", 00:12:59.056 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:12:59.056 "strip_size_kb": 0, 00:12:59.056 "state": "online", 00:12:59.056 "raid_level": "raid1", 00:12:59.056 "superblock": true, 00:12:59.056 "num_base_bdevs": 4, 
00:12:59.056 "num_base_bdevs_discovered": 3, 00:12:59.056 "num_base_bdevs_operational": 3, 00:12:59.056 "process": { 00:12:59.056 "type": "rebuild", 00:12:59.056 "target": "spare", 00:12:59.056 "progress": { 00:12:59.056 "blocks": 49152, 00:12:59.056 "percent": 77 00:12:59.056 } 00:12:59.056 }, 00:12:59.056 "base_bdevs_list": [ 00:12:59.056 { 00:12:59.056 "name": "spare", 00:12:59.056 "uuid": "e7ce309e-02e2-542e-b117-45fc00b9d0df", 00:12:59.056 "is_configured": true, 00:12:59.056 "data_offset": 2048, 00:12:59.056 "data_size": 63488 00:12:59.056 }, 00:12:59.056 { 00:12:59.056 "name": null, 00:12:59.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.056 "is_configured": false, 00:12:59.056 "data_offset": 0, 00:12:59.056 "data_size": 63488 00:12:59.056 }, 00:12:59.056 { 00:12:59.056 "name": "BaseBdev3", 00:12:59.056 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:12:59.056 "is_configured": true, 00:12:59.056 "data_offset": 2048, 00:12:59.056 "data_size": 63488 00:12:59.056 }, 00:12:59.056 { 00:12:59.056 "name": "BaseBdev4", 00:12:59.056 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:12:59.056 "is_configured": true, 00:12:59.056 "data_offset": 2048, 00:12:59.056 "data_size": 63488 00:12:59.056 } 00:12:59.056 ] 00:12:59.056 }' 00:12:59.056 03:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.056 03:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:59.056 03:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.056 03:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.056 03:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:59.316 [2024-11-18 03:13:02.862313] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:59.316 [2024-11-18 03:13:02.862410] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:59.316 [2024-11-18 03:13:02.862517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.887 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:59.887 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.887 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.887 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.887 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.887 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.887 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.887 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.887 03:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.887 03:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.887 03:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.887 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.887 "name": "raid_bdev1", 00:12:59.887 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:12:59.887 "strip_size_kb": 0, 00:12:59.887 "state": "online", 00:12:59.887 "raid_level": "raid1", 00:12:59.887 "superblock": true, 00:12:59.887 "num_base_bdevs": 4, 00:12:59.887 "num_base_bdevs_discovered": 3, 00:12:59.887 "num_base_bdevs_operational": 3, 00:12:59.887 "base_bdevs_list": [ 00:12:59.887 { 00:12:59.887 "name": "spare", 00:12:59.887 "uuid": 
"e7ce309e-02e2-542e-b117-45fc00b9d0df", 00:12:59.887 "is_configured": true, 00:12:59.887 "data_offset": 2048, 00:12:59.887 "data_size": 63488 00:12:59.887 }, 00:12:59.887 { 00:12:59.887 "name": null, 00:12:59.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.887 "is_configured": false, 00:12:59.887 "data_offset": 0, 00:12:59.887 "data_size": 63488 00:12:59.887 }, 00:12:59.887 { 00:12:59.887 "name": "BaseBdev3", 00:12:59.887 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:12:59.887 "is_configured": true, 00:12:59.887 "data_offset": 2048, 00:12:59.887 "data_size": 63488 00:12:59.887 }, 00:12:59.887 { 00:12:59.887 "name": "BaseBdev4", 00:12:59.887 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:12:59.887 "is_configured": true, 00:12:59.887 "data_offset": 2048, 00:12:59.887 "data_size": 63488 00:12:59.887 } 00:12:59.887 ] 00:12:59.887 }' 00:12:59.887 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.148 03:13:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.148 "name": "raid_bdev1", 00:13:00.148 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:13:00.148 "strip_size_kb": 0, 00:13:00.148 "state": "online", 00:13:00.148 "raid_level": "raid1", 00:13:00.148 "superblock": true, 00:13:00.148 "num_base_bdevs": 4, 00:13:00.148 "num_base_bdevs_discovered": 3, 00:13:00.148 "num_base_bdevs_operational": 3, 00:13:00.148 "base_bdevs_list": [ 00:13:00.148 { 00:13:00.148 "name": "spare", 00:13:00.148 "uuid": "e7ce309e-02e2-542e-b117-45fc00b9d0df", 00:13:00.148 "is_configured": true, 00:13:00.148 "data_offset": 2048, 00:13:00.148 "data_size": 63488 00:13:00.148 }, 00:13:00.148 { 00:13:00.148 "name": null, 00:13:00.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.148 "is_configured": false, 00:13:00.148 "data_offset": 0, 00:13:00.148 "data_size": 63488 00:13:00.148 }, 00:13:00.148 { 00:13:00.148 "name": "BaseBdev3", 00:13:00.148 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:13:00.148 "is_configured": true, 00:13:00.148 "data_offset": 2048, 00:13:00.148 "data_size": 63488 00:13:00.148 }, 00:13:00.148 { 00:13:00.148 "name": "BaseBdev4", 00:13:00.148 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:13:00.148 "is_configured": true, 00:13:00.148 "data_offset": 2048, 00:13:00.148 "data_size": 63488 00:13:00.148 } 00:13:00.148 ] 00:13:00.148 }' 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.148 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.148 "name": "raid_bdev1", 00:13:00.149 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:13:00.149 "strip_size_kb": 0, 00:13:00.149 "state": "online", 00:13:00.149 "raid_level": "raid1", 00:13:00.149 "superblock": true, 00:13:00.149 "num_base_bdevs": 4, 00:13:00.149 "num_base_bdevs_discovered": 3, 00:13:00.149 "num_base_bdevs_operational": 3, 00:13:00.149 "base_bdevs_list": [ 00:13:00.149 { 00:13:00.149 "name": "spare", 00:13:00.149 "uuid": "e7ce309e-02e2-542e-b117-45fc00b9d0df", 00:13:00.149 "is_configured": true, 00:13:00.149 "data_offset": 2048, 00:13:00.149 "data_size": 63488 00:13:00.149 }, 00:13:00.149 { 00:13:00.149 "name": null, 00:13:00.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.149 "is_configured": false, 00:13:00.149 "data_offset": 0, 00:13:00.149 "data_size": 63488 00:13:00.149 }, 00:13:00.149 { 00:13:00.149 "name": "BaseBdev3", 00:13:00.149 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:13:00.149 "is_configured": true, 00:13:00.149 "data_offset": 2048, 00:13:00.149 "data_size": 63488 00:13:00.149 }, 00:13:00.149 { 00:13:00.149 "name": "BaseBdev4", 00:13:00.149 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:13:00.149 "is_configured": true, 00:13:00.149 "data_offset": 2048, 00:13:00.149 "data_size": 63488 00:13:00.149 } 00:13:00.149 ] 00:13:00.149 }' 00:13:00.149 03:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.149 03:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.751 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:00.751 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.751 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.751 
[2024-11-18 03:13:04.148161] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:00.751 [2024-11-18 03:13:04.148194] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:00.751 [2024-11-18 03:13:04.148284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.751 [2024-11-18 03:13:04.148358] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.751 [2024-11-18 03:13:04.148376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:00.751 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.751 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.751 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.751 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.751 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:00.751 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.751 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:00.751 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:00.751 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:00.751 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:00.752 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:00.752 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:00.752 03:13:04 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:00.752 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:00.752 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:00.752 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:00.752 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:00.752 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:00.752 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:01.028 /dev/nbd0 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:13:01.028 1+0 records in 00:13:01.028 1+0 records out 00:13:01.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264337 s, 15.5 MB/s 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:01.028 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:01.288 /dev/nbd1 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 
00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.288 1+0 records in 00:13:01.288 1+0 records out 00:13:01.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520418 s, 7.9 MB/s 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local 
i 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.288 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:01.549 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:01.549 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:01.549 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:01.549 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.549 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.549 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:01.549 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:01.549 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.549 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.549 03:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:01.809 
03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.809 [2024-11-18 03:13:05.226648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:01.809 [2024-11-18 03:13:05.226756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.809 [2024-11-18 03:13:05.226795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:01.809 [2024-11-18 03:13:05.226808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.809 [2024-11-18 03:13:05.229158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.809 [2024-11-18 03:13:05.229197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:01.809 [2024-11-18 03:13:05.229283] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:01.809 [2024-11-18 03:13:05.229319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
spare is claimed 00:13:01.809 [2024-11-18 03:13:05.229416] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:01.809 [2024-11-18 03:13:05.229502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:01.809 spare 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.809 [2024-11-18 03:13:05.329398] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:13:01.809 [2024-11-18 03:13:05.329479] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:01.809 [2024-11-18 03:13:05.329810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:01.809 [2024-11-18 03:13:05.330016] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:13:01.809 [2024-11-18 03:13:05.330059] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:13:01.809 [2024-11-18 03:13:05.330234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.809 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.810 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:01.810 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.810 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.810 03:13:05 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.810 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.810 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.810 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.810 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.810 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.810 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.810 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.810 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.810 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.810 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.810 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.070 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.070 "name": "raid_bdev1", 00:13:02.070 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:13:02.070 "strip_size_kb": 0, 00:13:02.070 "state": "online", 00:13:02.070 "raid_level": "raid1", 00:13:02.070 "superblock": true, 00:13:02.070 "num_base_bdevs": 4, 00:13:02.070 "num_base_bdevs_discovered": 3, 00:13:02.070 "num_base_bdevs_operational": 3, 00:13:02.070 "base_bdevs_list": [ 00:13:02.070 { 00:13:02.070 "name": "spare", 00:13:02.070 "uuid": "e7ce309e-02e2-542e-b117-45fc00b9d0df", 00:13:02.070 "is_configured": true, 00:13:02.070 "data_offset": 2048, 00:13:02.070 "data_size": 63488 00:13:02.070 }, 00:13:02.070 { 00:13:02.070 "name": null, 
00:13:02.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.070 "is_configured": false, 00:13:02.070 "data_offset": 2048, 00:13:02.070 "data_size": 63488 00:13:02.070 }, 00:13:02.070 { 00:13:02.070 "name": "BaseBdev3", 00:13:02.070 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:13:02.070 "is_configured": true, 00:13:02.070 "data_offset": 2048, 00:13:02.070 "data_size": 63488 00:13:02.070 }, 00:13:02.070 { 00:13:02.070 "name": "BaseBdev4", 00:13:02.070 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:13:02.070 "is_configured": true, 00:13:02.070 "data_offset": 2048, 00:13:02.070 "data_size": 63488 00:13:02.070 } 00:13:02.070 ] 00:13:02.070 }' 00:13:02.070 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.070 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.330 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:02.331 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.331 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:02.331 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:02.331 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.331 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.331 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.331 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.331 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.331 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.331 03:13:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.331 "name": "raid_bdev1", 00:13:02.331 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:13:02.331 "strip_size_kb": 0, 00:13:02.331 "state": "online", 00:13:02.331 "raid_level": "raid1", 00:13:02.331 "superblock": true, 00:13:02.331 "num_base_bdevs": 4, 00:13:02.331 "num_base_bdevs_discovered": 3, 00:13:02.331 "num_base_bdevs_operational": 3, 00:13:02.331 "base_bdevs_list": [ 00:13:02.331 { 00:13:02.331 "name": "spare", 00:13:02.331 "uuid": "e7ce309e-02e2-542e-b117-45fc00b9d0df", 00:13:02.331 "is_configured": true, 00:13:02.331 "data_offset": 2048, 00:13:02.331 "data_size": 63488 00:13:02.331 }, 00:13:02.331 { 00:13:02.331 "name": null, 00:13:02.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.331 "is_configured": false, 00:13:02.331 "data_offset": 2048, 00:13:02.331 "data_size": 63488 00:13:02.331 }, 00:13:02.331 { 00:13:02.331 "name": "BaseBdev3", 00:13:02.331 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:13:02.331 "is_configured": true, 00:13:02.331 "data_offset": 2048, 00:13:02.331 "data_size": 63488 00:13:02.331 }, 00:13:02.331 { 00:13:02.331 "name": "BaseBdev4", 00:13:02.331 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:13:02.331 "is_configured": true, 00:13:02.331 "data_offset": 2048, 00:13:02.331 "data_size": 63488 00:13:02.331 } 00:13:02.331 ] 00:13:02.331 }' 00:13:02.331 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.331 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:02.331 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.592 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:02.592 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.592 03:13:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:02.592 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.592 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.592 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.592 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.592 03:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:02.592 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.592 03:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.592 [2024-11-18 03:13:05.997380] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.592 "name": "raid_bdev1", 00:13:02.592 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:13:02.592 "strip_size_kb": 0, 00:13:02.592 "state": "online", 00:13:02.592 "raid_level": "raid1", 00:13:02.592 "superblock": true, 00:13:02.592 "num_base_bdevs": 4, 00:13:02.592 "num_base_bdevs_discovered": 2, 00:13:02.592 "num_base_bdevs_operational": 2, 00:13:02.592 "base_bdevs_list": [ 00:13:02.592 { 00:13:02.592 "name": null, 00:13:02.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.592 "is_configured": false, 00:13:02.592 "data_offset": 0, 00:13:02.592 "data_size": 63488 00:13:02.592 }, 00:13:02.592 { 00:13:02.592 "name": null, 00:13:02.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.592 "is_configured": false, 00:13:02.592 "data_offset": 2048, 00:13:02.592 "data_size": 63488 00:13:02.592 }, 00:13:02.592 { 00:13:02.592 "name": "BaseBdev3", 00:13:02.592 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:13:02.592 "is_configured": true, 00:13:02.592 "data_offset": 2048, 00:13:02.592 "data_size": 63488 00:13:02.592 }, 00:13:02.592 { 00:13:02.592 "name": "BaseBdev4", 00:13:02.592 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:13:02.592 "is_configured": 
true, 00:13:02.592 "data_offset": 2048, 00:13:02.592 "data_size": 63488 00:13:02.592 } 00:13:02.592 ] 00:13:02.592 }' 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.592 03:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.163 03:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:03.163 03:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.163 03:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.163 [2024-11-18 03:13:06.488582] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:03.163 [2024-11-18 03:13:06.488760] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:03.163 [2024-11-18 03:13:06.488778] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:03.163 [2024-11-18 03:13:06.488823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:03.163 [2024-11-18 03:13:06.492057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:03.163 [2024-11-18 03:13:06.493950] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:03.163 03:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.163 03:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:04.103 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.103 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.103 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.103 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.103 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.103 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.104 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.104 03:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.104 03:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.104 03:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.104 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.104 "name": "raid_bdev1", 00:13:04.104 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:13:04.104 "strip_size_kb": 0, 00:13:04.104 "state": "online", 00:13:04.104 "raid_level": "raid1", 
00:13:04.104 "superblock": true, 00:13:04.104 "num_base_bdevs": 4, 00:13:04.104 "num_base_bdevs_discovered": 3, 00:13:04.104 "num_base_bdevs_operational": 3, 00:13:04.104 "process": { 00:13:04.104 "type": "rebuild", 00:13:04.104 "target": "spare", 00:13:04.104 "progress": { 00:13:04.104 "blocks": 20480, 00:13:04.104 "percent": 32 00:13:04.104 } 00:13:04.104 }, 00:13:04.104 "base_bdevs_list": [ 00:13:04.104 { 00:13:04.104 "name": "spare", 00:13:04.104 "uuid": "e7ce309e-02e2-542e-b117-45fc00b9d0df", 00:13:04.104 "is_configured": true, 00:13:04.104 "data_offset": 2048, 00:13:04.104 "data_size": 63488 00:13:04.104 }, 00:13:04.104 { 00:13:04.104 "name": null, 00:13:04.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.104 "is_configured": false, 00:13:04.104 "data_offset": 2048, 00:13:04.104 "data_size": 63488 00:13:04.104 }, 00:13:04.104 { 00:13:04.104 "name": "BaseBdev3", 00:13:04.104 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:13:04.104 "is_configured": true, 00:13:04.104 "data_offset": 2048, 00:13:04.104 "data_size": 63488 00:13:04.104 }, 00:13:04.104 { 00:13:04.104 "name": "BaseBdev4", 00:13:04.104 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:13:04.104 "is_configured": true, 00:13:04.104 "data_offset": 2048, 00:13:04.104 "data_size": 63488 00:13:04.104 } 00:13:04.104 ] 00:13:04.104 }' 00:13:04.104 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.104 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.104 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.104 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.104 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:04.104 03:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:04.104 03:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.104 [2024-11-18 03:13:07.668883] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.364 [2024-11-18 03:13:07.698287] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:04.364 [2024-11-18 03:13:07.698407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.364 [2024-11-18 03:13:07.698442] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.364 [2024-11-18 03:13:07.698465] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:04.364 03:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.364 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:04.364 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.364 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.364 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.364 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.364 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:04.364 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.364 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.364 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.364 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.364 03:13:07 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.364 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.364 03:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.364 03:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.364 03:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.364 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.364 "name": "raid_bdev1", 00:13:04.364 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:13:04.364 "strip_size_kb": 0, 00:13:04.364 "state": "online", 00:13:04.364 "raid_level": "raid1", 00:13:04.364 "superblock": true, 00:13:04.364 "num_base_bdevs": 4, 00:13:04.364 "num_base_bdevs_discovered": 2, 00:13:04.364 "num_base_bdevs_operational": 2, 00:13:04.364 "base_bdevs_list": [ 00:13:04.364 { 00:13:04.364 "name": null, 00:13:04.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.364 "is_configured": false, 00:13:04.364 "data_offset": 0, 00:13:04.364 "data_size": 63488 00:13:04.364 }, 00:13:04.364 { 00:13:04.364 "name": null, 00:13:04.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.364 "is_configured": false, 00:13:04.364 "data_offset": 2048, 00:13:04.365 "data_size": 63488 00:13:04.365 }, 00:13:04.365 { 00:13:04.365 "name": "BaseBdev3", 00:13:04.365 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:13:04.365 "is_configured": true, 00:13:04.365 "data_offset": 2048, 00:13:04.365 "data_size": 63488 00:13:04.365 }, 00:13:04.365 { 00:13:04.365 "name": "BaseBdev4", 00:13:04.365 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:13:04.365 "is_configured": true, 00:13:04.365 "data_offset": 2048, 00:13:04.365 "data_size": 63488 00:13:04.365 } 00:13:04.365 ] 00:13:04.365 }' 00:13:04.365 03:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:04.365 03:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.625 03:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:04.625 03:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.625 03:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.625 [2024-11-18 03:13:08.169625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:04.625 [2024-11-18 03:13:08.169763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.625 [2024-11-18 03:13:08.169810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:04.625 [2024-11-18 03:13:08.169844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.625 [2024-11-18 03:13:08.170349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.625 [2024-11-18 03:13:08.170416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:04.625 [2024-11-18 03:13:08.170538] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:04.625 [2024-11-18 03:13:08.170589] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:04.625 [2024-11-18 03:13:08.170636] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:04.625 [2024-11-18 03:13:08.170719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:04.625 [2024-11-18 03:13:08.174009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:04.625 spare 00:13:04.625 03:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.625 03:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:04.625 [2024-11-18 03:13:08.176190] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.008 "name": "raid_bdev1", 00:13:06.008 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:13:06.008 "strip_size_kb": 0, 00:13:06.008 "state": "online", 00:13:06.008 
"raid_level": "raid1", 00:13:06.008 "superblock": true, 00:13:06.008 "num_base_bdevs": 4, 00:13:06.008 "num_base_bdevs_discovered": 3, 00:13:06.008 "num_base_bdevs_operational": 3, 00:13:06.008 "process": { 00:13:06.008 "type": "rebuild", 00:13:06.008 "target": "spare", 00:13:06.008 "progress": { 00:13:06.008 "blocks": 20480, 00:13:06.008 "percent": 32 00:13:06.008 } 00:13:06.008 }, 00:13:06.008 "base_bdevs_list": [ 00:13:06.008 { 00:13:06.008 "name": "spare", 00:13:06.008 "uuid": "e7ce309e-02e2-542e-b117-45fc00b9d0df", 00:13:06.008 "is_configured": true, 00:13:06.008 "data_offset": 2048, 00:13:06.008 "data_size": 63488 00:13:06.008 }, 00:13:06.008 { 00:13:06.008 "name": null, 00:13:06.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.008 "is_configured": false, 00:13:06.008 "data_offset": 2048, 00:13:06.008 "data_size": 63488 00:13:06.008 }, 00:13:06.008 { 00:13:06.008 "name": "BaseBdev3", 00:13:06.008 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:13:06.008 "is_configured": true, 00:13:06.008 "data_offset": 2048, 00:13:06.008 "data_size": 63488 00:13:06.008 }, 00:13:06.008 { 00:13:06.008 "name": "BaseBdev4", 00:13:06.008 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:13:06.008 "is_configured": true, 00:13:06.008 "data_offset": 2048, 00:13:06.008 "data_size": 63488 00:13:06.008 } 00:13:06.008 ] 00:13:06.008 }' 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.008 [2024-11-18 03:13:09.332762] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.008 [2024-11-18 03:13:09.380436] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:06.008 [2024-11-18 03:13:09.380557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.008 [2024-11-18 03:13:09.380577] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.008 [2024-11-18 03:13:09.380584] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.008 
03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.008 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.009 03:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.009 03:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.009 03:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.009 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.009 "name": "raid_bdev1", 00:13:06.009 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:13:06.009 "strip_size_kb": 0, 00:13:06.009 "state": "online", 00:13:06.009 "raid_level": "raid1", 00:13:06.009 "superblock": true, 00:13:06.009 "num_base_bdevs": 4, 00:13:06.009 "num_base_bdevs_discovered": 2, 00:13:06.009 "num_base_bdevs_operational": 2, 00:13:06.009 "base_bdevs_list": [ 00:13:06.009 { 00:13:06.009 "name": null, 00:13:06.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.009 "is_configured": false, 00:13:06.009 "data_offset": 0, 00:13:06.009 "data_size": 63488 00:13:06.009 }, 00:13:06.009 { 00:13:06.009 "name": null, 00:13:06.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.009 "is_configured": false, 00:13:06.009 "data_offset": 2048, 00:13:06.009 "data_size": 63488 00:13:06.009 }, 00:13:06.009 { 00:13:06.009 "name": "BaseBdev3", 00:13:06.009 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:13:06.009 "is_configured": true, 00:13:06.009 "data_offset": 2048, 00:13:06.009 "data_size": 63488 00:13:06.009 }, 00:13:06.009 { 00:13:06.009 "name": "BaseBdev4", 00:13:06.009 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:13:06.009 "is_configured": true, 00:13:06.009 "data_offset": 2048, 00:13:06.009 "data_size": 63488 00:13:06.009 } 00:13:06.009 ] 00:13:06.009 }' 00:13:06.009 03:13:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.009 03:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.269 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:06.269 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.269 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:06.269 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:06.269 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.269 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.269 03:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.269 03:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.269 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.269 03:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.269 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.269 "name": "raid_bdev1", 00:13:06.269 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:13:06.269 "strip_size_kb": 0, 00:13:06.269 "state": "online", 00:13:06.269 "raid_level": "raid1", 00:13:06.269 "superblock": true, 00:13:06.269 "num_base_bdevs": 4, 00:13:06.269 "num_base_bdevs_discovered": 2, 00:13:06.269 "num_base_bdevs_operational": 2, 00:13:06.269 "base_bdevs_list": [ 00:13:06.269 { 00:13:06.269 "name": null, 00:13:06.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.269 "is_configured": false, 00:13:06.269 "data_offset": 0, 00:13:06.269 "data_size": 63488 00:13:06.269 }, 00:13:06.269 
{ 00:13:06.269 "name": null, 00:13:06.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.269 "is_configured": false, 00:13:06.269 "data_offset": 2048, 00:13:06.269 "data_size": 63488 00:13:06.269 }, 00:13:06.269 { 00:13:06.269 "name": "BaseBdev3", 00:13:06.269 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:13:06.269 "is_configured": true, 00:13:06.269 "data_offset": 2048, 00:13:06.269 "data_size": 63488 00:13:06.269 }, 00:13:06.269 { 00:13:06.269 "name": "BaseBdev4", 00:13:06.269 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:13:06.269 "is_configured": true, 00:13:06.269 "data_offset": 2048, 00:13:06.269 "data_size": 63488 00:13:06.269 } 00:13:06.269 ] 00:13:06.269 }' 00:13:06.269 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.269 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:06.269 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.529 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:06.529 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:06.529 03:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.529 03:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.529 03:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.529 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:06.529 03:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.529 03:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.529 [2024-11-18 03:13:09.887934] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:06.529 [2024-11-18 03:13:09.888015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.529 [2024-11-18 03:13:09.888041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:06.529 [2024-11-18 03:13:09.888049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.529 [2024-11-18 03:13:09.888480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.529 [2024-11-18 03:13:09.888502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:06.529 [2024-11-18 03:13:09.888578] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:06.529 [2024-11-18 03:13:09.888598] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:06.529 [2024-11-18 03:13:09.888608] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:06.529 [2024-11-18 03:13:09.888618] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:06.529 BaseBdev1 00:13:06.529 03:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.529 03:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:07.470 03:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:07.470 03:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.470 03:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.470 03:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.470 03:13:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.470 03:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:07.470 03:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.470 03:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.470 03:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.470 03:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.470 03:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.470 03:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.470 03:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.470 03:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.470 03:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.470 03:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.470 "name": "raid_bdev1", 00:13:07.470 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:13:07.470 "strip_size_kb": 0, 00:13:07.470 "state": "online", 00:13:07.470 "raid_level": "raid1", 00:13:07.470 "superblock": true, 00:13:07.470 "num_base_bdevs": 4, 00:13:07.470 "num_base_bdevs_discovered": 2, 00:13:07.470 "num_base_bdevs_operational": 2, 00:13:07.470 "base_bdevs_list": [ 00:13:07.470 { 00:13:07.470 "name": null, 00:13:07.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.470 "is_configured": false, 00:13:07.470 "data_offset": 0, 00:13:07.470 "data_size": 63488 00:13:07.470 }, 00:13:07.470 { 00:13:07.470 "name": null, 00:13:07.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.470 
"is_configured": false, 00:13:07.470 "data_offset": 2048, 00:13:07.470 "data_size": 63488 00:13:07.470 }, 00:13:07.470 { 00:13:07.470 "name": "BaseBdev3", 00:13:07.470 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:13:07.470 "is_configured": true, 00:13:07.470 "data_offset": 2048, 00:13:07.470 "data_size": 63488 00:13:07.470 }, 00:13:07.470 { 00:13:07.470 "name": "BaseBdev4", 00:13:07.470 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:13:07.470 "is_configured": true, 00:13:07.470 "data_offset": 2048, 00:13:07.470 "data_size": 63488 00:13:07.470 } 00:13:07.470 ] 00:13:07.470 }' 00:13:07.470 03:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.470 03:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:08.041 "name": "raid_bdev1", 00:13:08.041 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:13:08.041 "strip_size_kb": 0, 00:13:08.041 "state": "online", 00:13:08.041 "raid_level": "raid1", 00:13:08.041 "superblock": true, 00:13:08.041 "num_base_bdevs": 4, 00:13:08.041 "num_base_bdevs_discovered": 2, 00:13:08.041 "num_base_bdevs_operational": 2, 00:13:08.041 "base_bdevs_list": [ 00:13:08.041 { 00:13:08.041 "name": null, 00:13:08.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.041 "is_configured": false, 00:13:08.041 "data_offset": 0, 00:13:08.041 "data_size": 63488 00:13:08.041 }, 00:13:08.041 { 00:13:08.041 "name": null, 00:13:08.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.041 "is_configured": false, 00:13:08.041 "data_offset": 2048, 00:13:08.041 "data_size": 63488 00:13:08.041 }, 00:13:08.041 { 00:13:08.041 "name": "BaseBdev3", 00:13:08.041 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:13:08.041 "is_configured": true, 00:13:08.041 "data_offset": 2048, 00:13:08.041 "data_size": 63488 00:13:08.041 }, 00:13:08.041 { 00:13:08.041 "name": "BaseBdev4", 00:13:08.041 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:13:08.041 "is_configured": true, 00:13:08.041 "data_offset": 2048, 00:13:08.041 "data_size": 63488 00:13:08.041 } 00:13:08.041 ] 00:13:08.041 }' 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.041 [2024-11-18 03:13:11.513213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:08.041 [2024-11-18 03:13:11.513366] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:08.041 [2024-11-18 03:13:11.513380] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:08.041 request: 00:13:08.041 { 00:13:08.041 "base_bdev": "BaseBdev1", 00:13:08.041 "raid_bdev": "raid_bdev1", 00:13:08.041 "method": "bdev_raid_add_base_bdev", 00:13:08.041 "req_id": 1 00:13:08.041 } 00:13:08.041 Got JSON-RPC error response 00:13:08.041 response: 00:13:08.041 { 00:13:08.041 "code": -22, 00:13:08.041 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:08.041 } 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:08.041 03:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:08.981 03:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:08.981 03:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.981 03:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.981 03:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.981 03:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.981 03:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:08.981 03:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.981 03:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.981 03:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.981 03:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.981 03:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.981 03:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.981 03:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.981 03:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:09.242 03:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.242 03:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.242 "name": "raid_bdev1", 00:13:09.242 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:13:09.242 "strip_size_kb": 0, 00:13:09.242 "state": "online", 00:13:09.242 "raid_level": "raid1", 00:13:09.242 "superblock": true, 00:13:09.242 "num_base_bdevs": 4, 00:13:09.242 "num_base_bdevs_discovered": 2, 00:13:09.242 "num_base_bdevs_operational": 2, 00:13:09.242 "base_bdevs_list": [ 00:13:09.242 { 00:13:09.242 "name": null, 00:13:09.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.242 "is_configured": false, 00:13:09.242 "data_offset": 0, 00:13:09.242 "data_size": 63488 00:13:09.242 }, 00:13:09.242 { 00:13:09.242 "name": null, 00:13:09.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.242 "is_configured": false, 00:13:09.242 "data_offset": 2048, 00:13:09.242 "data_size": 63488 00:13:09.242 }, 00:13:09.242 { 00:13:09.242 "name": "BaseBdev3", 00:13:09.242 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:13:09.242 "is_configured": true, 00:13:09.242 "data_offset": 2048, 00:13:09.242 "data_size": 63488 00:13:09.242 }, 00:13:09.242 { 00:13:09.242 "name": "BaseBdev4", 00:13:09.242 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:13:09.242 "is_configured": true, 00:13:09.242 "data_offset": 2048, 00:13:09.242 "data_size": 63488 00:13:09.242 } 00:13:09.242 ] 00:13:09.242 }' 00:13:09.242 03:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.242 03:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.502 03:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:09.502 03:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.502 03:13:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:09.502 03:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:09.502 03:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.502 03:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.502 03:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.502 03:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.502 03:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.502 03:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.502 03:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.502 "name": "raid_bdev1", 00:13:09.502 "uuid": "e929e8d4-2ded-4287-a012-ba2d808b33f6", 00:13:09.502 "strip_size_kb": 0, 00:13:09.502 "state": "online", 00:13:09.502 "raid_level": "raid1", 00:13:09.502 "superblock": true, 00:13:09.502 "num_base_bdevs": 4, 00:13:09.502 "num_base_bdevs_discovered": 2, 00:13:09.502 "num_base_bdevs_operational": 2, 00:13:09.502 "base_bdevs_list": [ 00:13:09.502 { 00:13:09.502 "name": null, 00:13:09.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.502 "is_configured": false, 00:13:09.502 "data_offset": 0, 00:13:09.502 "data_size": 63488 00:13:09.502 }, 00:13:09.502 { 00:13:09.502 "name": null, 00:13:09.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.502 "is_configured": false, 00:13:09.502 "data_offset": 2048, 00:13:09.502 "data_size": 63488 00:13:09.502 }, 00:13:09.502 { 00:13:09.502 "name": "BaseBdev3", 00:13:09.502 "uuid": "5a55f10f-4241-5d41-b45b-83020395a2f3", 00:13:09.502 "is_configured": true, 00:13:09.502 "data_offset": 2048, 00:13:09.502 "data_size": 63488 00:13:09.502 }, 
00:13:09.502 { 00:13:09.502 "name": "BaseBdev4", 00:13:09.502 "uuid": "62167589-0e1b-5757-b6e5-4ee420435c43", 00:13:09.502 "is_configured": true, 00:13:09.502 "data_offset": 2048, 00:13:09.502 "data_size": 63488 00:13:09.502 } 00:13:09.502 ] 00:13:09.502 }' 00:13:09.502 03:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.763 03:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:09.763 03:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.763 03:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:09.763 03:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88748 00:13:09.763 03:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 88748 ']' 00:13:09.763 03:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 88748 00:13:09.763 03:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:09.763 03:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:09.763 03:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88748 00:13:09.763 03:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:09.763 03:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:09.763 03:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88748' 00:13:09.763 killing process with pid 88748 00:13:09.763 Received shutdown signal, test time was about 60.000000 seconds 00:13:09.763 00:13:09.763 Latency(us) 00:13:09.763 [2024-11-18T03:13:13.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.763 
[2024-11-18T03:13:13.340Z] =================================================================================================================== 00:13:09.763 [2024-11-18T03:13:13.340Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:09.763 03:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 88748 00:13:09.763 [2024-11-18 03:13:13.182280] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:09.763 [2024-11-18 03:13:13.182430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.763 03:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 88748 00:13:09.763 [2024-11-18 03:13:13.182492] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.763 [2024-11-18 03:13:13.182503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:13:09.763 [2024-11-18 03:13:13.234591] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:10.023 00:13:10.023 real 0m23.016s 00:13:10.023 user 0m28.764s 00:13:10.023 sys 0m3.445s 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.023 ************************************ 00:13:10.023 END TEST raid_rebuild_test_sb 00:13:10.023 ************************************ 00:13:10.023 03:13:13 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:10.023 03:13:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:10.023 03:13:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:10.023 03:13:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:13:10.023 ************************************ 00:13:10.023 START TEST raid_rebuild_test_io 00:13:10.023 ************************************ 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89486 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89486 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 89486 ']' 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:13:10.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:10.023 03:13:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.283 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:10.284 Zero copy mechanism will not be used. 00:13:10.284 [2024-11-18 03:13:13.640800] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:10.284 [2024-11-18 03:13:13.640952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89486 ] 00:13:10.284 [2024-11-18 03:13:13.802548] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.284 [2024-11-18 03:13:13.852767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.544 [2024-11-18 03:13:13.895283] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.544 [2024-11-18 03:13:13.895326] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.115 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:11.115 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:13:11.115 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:11.115 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:13:11.115 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.115 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.115 BaseBdev1_malloc 00:13:11.115 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.115 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:11.115 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.116 [2024-11-18 03:13:14.485893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:11.116 [2024-11-18 03:13:14.485969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.116 [2024-11-18 03:13:14.485995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:11.116 [2024-11-18 03:13:14.486009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.116 [2024-11-18 03:13:14.488126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.116 [2024-11-18 03:13:14.488160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:11.116 BaseBdev1 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:11.116 BaseBdev2_malloc 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.116 [2024-11-18 03:13:14.524469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:11.116 [2024-11-18 03:13:14.524528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.116 [2024-11-18 03:13:14.524553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:11.116 [2024-11-18 03:13:14.524563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.116 [2024-11-18 03:13:14.527013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.116 [2024-11-18 03:13:14.527044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:11.116 BaseBdev2 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.116 BaseBdev3_malloc 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.116 03:13:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.116 [2024-11-18 03:13:14.553125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:11.116 [2024-11-18 03:13:14.553175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.116 [2024-11-18 03:13:14.553200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:11.116 [2024-11-18 03:13:14.553208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.116 [2024-11-18 03:13:14.555274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.116 [2024-11-18 03:13:14.555308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:11.116 BaseBdev3 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.116 BaseBdev4_malloc 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.116 [2024-11-18 03:13:14.581702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:11.116 [2024-11-18 03:13:14.581759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.116 [2024-11-18 03:13:14.581782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:11.116 [2024-11-18 03:13:14.581790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.116 [2024-11-18 03:13:14.583990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.116 [2024-11-18 03:13:14.584020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:11.116 BaseBdev4 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.116 spare_malloc 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.116 spare_delay 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.116 
03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.116 [2024-11-18 03:13:14.622310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:11.116 [2024-11-18 03:13:14.622362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.116 [2024-11-18 03:13:14.622384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:11.116 [2024-11-18 03:13:14.622393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.116 [2024-11-18 03:13:14.624461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.116 [2024-11-18 03:13:14.624494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:11.116 spare 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.116 [2024-11-18 03:13:14.634378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:11.116 [2024-11-18 03:13:14.636194] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.116 [2024-11-18 03:13:14.636268] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:11.116 [2024-11-18 03:13:14.636311] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:11.116 [2024-11-18 03:13:14.636387] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:11.116 [2024-11-18 03:13:14.636397] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:11.116 [2024-11-18 03:13:14.636698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:11.116 [2024-11-18 03:13:14.636844] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:11.116 [2024-11-18 03:13:14.636870] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:11.116 [2024-11-18 03:13:14.637000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.116 03:13:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.116 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.377 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.377 "name": "raid_bdev1", 00:13:11.377 "uuid": "cd8490e5-c53a-44b1-bad2-7799e8df05a7", 00:13:11.377 "strip_size_kb": 0, 00:13:11.377 "state": "online", 00:13:11.377 "raid_level": "raid1", 00:13:11.377 "superblock": false, 00:13:11.377 "num_base_bdevs": 4, 00:13:11.377 "num_base_bdevs_discovered": 4, 00:13:11.377 "num_base_bdevs_operational": 4, 00:13:11.377 "base_bdevs_list": [ 00:13:11.377 { 00:13:11.377 "name": "BaseBdev1", 00:13:11.377 "uuid": "f87d0f6a-16db-5460-8428-b20b28ae387b", 00:13:11.377 "is_configured": true, 00:13:11.377 "data_offset": 0, 00:13:11.377 "data_size": 65536 00:13:11.377 }, 00:13:11.377 { 00:13:11.377 "name": "BaseBdev2", 00:13:11.377 "uuid": "2e0cf350-50a3-51dd-aa49-23e1f201c09b", 00:13:11.377 "is_configured": true, 00:13:11.377 "data_offset": 0, 00:13:11.377 "data_size": 65536 00:13:11.377 }, 00:13:11.377 { 00:13:11.377 "name": "BaseBdev3", 00:13:11.377 "uuid": "fd868e90-2c8d-5bbd-994b-1ca0c1b2b915", 00:13:11.377 "is_configured": true, 00:13:11.377 "data_offset": 0, 00:13:11.377 "data_size": 65536 00:13:11.377 }, 00:13:11.377 { 00:13:11.377 "name": "BaseBdev4", 00:13:11.377 "uuid": "11d54e37-462d-51b7-ba45-bc5a396dd73c", 00:13:11.377 "is_configured": true, 00:13:11.377 "data_offset": 0, 00:13:11.377 "data_size": 65536 
00:13:11.377 } 00:13:11.377 ] 00:13:11.377 }' 00:13:11.377 03:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.377 03:13:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.637 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:11.637 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:11.637 03:13:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.637 03:13:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.637 [2024-11-18 03:13:15.081918] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.637 03:13:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.637 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:11.637 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.638 
03:13:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:11.638 [2024-11-18 03:13:15.169421] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:13:11.638 03:13:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.898 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.898 "name": "raid_bdev1", 00:13:11.898 "uuid": "cd8490e5-c53a-44b1-bad2-7799e8df05a7", 00:13:11.898 "strip_size_kb": 0, 00:13:11.898 "state": "online", 00:13:11.898 "raid_level": "raid1", 00:13:11.898 "superblock": false, 00:13:11.898 "num_base_bdevs": 4, 00:13:11.898 "num_base_bdevs_discovered": 3, 00:13:11.898 "num_base_bdevs_operational": 3, 00:13:11.898 "base_bdevs_list": [ 00:13:11.898 { 00:13:11.898 "name": null, 00:13:11.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.898 "is_configured": false, 00:13:11.898 "data_offset": 0, 00:13:11.898 "data_size": 65536 00:13:11.898 }, 00:13:11.898 { 00:13:11.898 "name": "BaseBdev2", 00:13:11.898 "uuid": "2e0cf350-50a3-51dd-aa49-23e1f201c09b", 00:13:11.898 "is_configured": true, 00:13:11.898 "data_offset": 0, 00:13:11.898 "data_size": 65536 00:13:11.898 }, 00:13:11.898 { 00:13:11.898 "name": "BaseBdev3", 00:13:11.898 "uuid": "fd868e90-2c8d-5bbd-994b-1ca0c1b2b915", 00:13:11.898 "is_configured": true, 00:13:11.898 "data_offset": 0, 00:13:11.898 "data_size": 65536 00:13:11.898 }, 00:13:11.898 { 00:13:11.898 "name": "BaseBdev4", 00:13:11.898 "uuid": "11d54e37-462d-51b7-ba45-bc5a396dd73c", 00:13:11.898 "is_configured": true, 00:13:11.898 "data_offset": 0, 00:13:11.898 "data_size": 65536 00:13:11.898 } 00:13:11.898 ] 00:13:11.898 }' 00:13:11.898 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.898 03:13:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.898 [2024-11-18 03:13:15.259322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:11.898 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:11.898 Zero copy mechanism will not be used. 
00:13:11.898 Running I/O for 60 seconds... 00:13:12.158 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:12.158 03:13:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.158 03:13:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.159 [2024-11-18 03:13:15.596541] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:12.159 03:13:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.159 03:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:12.159 [2024-11-18 03:13:15.632667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:12.159 [2024-11-18 03:13:15.634750] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:12.419 [2024-11-18 03:13:15.786645] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:12.679 [2024-11-18 03:13:16.026985] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:12.679 [2024-11-18 03:13:16.027278] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:12.939 225.00 IOPS, 675.00 MiB/s [2024-11-18T03:13:16.516Z] [2024-11-18 03:13:16.305337] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:12.939 [2024-11-18 03:13:16.433512] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:12.939 [2024-11-18 03:13:16.434198] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:13.200 03:13:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.200 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.200 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.200 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.200 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.200 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.200 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.200 03:13:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.200 03:13:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.200 03:13:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.200 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.200 "name": "raid_bdev1", 00:13:13.200 "uuid": "cd8490e5-c53a-44b1-bad2-7799e8df05a7", 00:13:13.200 "strip_size_kb": 0, 00:13:13.200 "state": "online", 00:13:13.200 "raid_level": "raid1", 00:13:13.200 "superblock": false, 00:13:13.200 "num_base_bdevs": 4, 00:13:13.200 "num_base_bdevs_discovered": 4, 00:13:13.200 "num_base_bdevs_operational": 4, 00:13:13.200 "process": { 00:13:13.200 "type": "rebuild", 00:13:13.200 "target": "spare", 00:13:13.200 "progress": { 00:13:13.200 "blocks": 10240, 00:13:13.200 "percent": 15 00:13:13.200 } 00:13:13.200 }, 00:13:13.200 "base_bdevs_list": [ 00:13:13.200 { 00:13:13.200 "name": "spare", 00:13:13.200 "uuid": "80e04e31-e0e1-5b57-94d9-2daa94ffac63", 00:13:13.200 "is_configured": true, 00:13:13.200 "data_offset": 0, 00:13:13.200 "data_size": 65536 
00:13:13.200 }, 00:13:13.200 { 00:13:13.200 "name": "BaseBdev2", 00:13:13.200 "uuid": "2e0cf350-50a3-51dd-aa49-23e1f201c09b", 00:13:13.200 "is_configured": true, 00:13:13.200 "data_offset": 0, 00:13:13.200 "data_size": 65536 00:13:13.200 }, 00:13:13.200 { 00:13:13.200 "name": "BaseBdev3", 00:13:13.200 "uuid": "fd868e90-2c8d-5bbd-994b-1ca0c1b2b915", 00:13:13.200 "is_configured": true, 00:13:13.200 "data_offset": 0, 00:13:13.200 "data_size": 65536 00:13:13.200 }, 00:13:13.200 { 00:13:13.200 "name": "BaseBdev4", 00:13:13.200 "uuid": "11d54e37-462d-51b7-ba45-bc5a396dd73c", 00:13:13.200 "is_configured": true, 00:13:13.200 "data_offset": 0, 00:13:13.200 "data_size": 65536 00:13:13.200 } 00:13:13.200 ] 00:13:13.200 }' 00:13:13.200 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.200 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.200 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.200 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.200 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:13.200 03:13:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.460 03:13:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.460 [2024-11-18 03:13:16.778738] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:13.460 [2024-11-18 03:13:16.778805] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:13.460 [2024-11-18 03:13:16.880787] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:13.460 [2024-11-18 03:13:16.896459] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.460 [2024-11-18 03:13:16.896526] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:13.460 [2024-11-18 03:13:16.896542] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:13.460 [2024-11-18 03:13:16.914265] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:13.460 03:13:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.460 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:13.460 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.460 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.460 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.460 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.460 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.460 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.460 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.460 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.460 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.460 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.460 03:13:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.460 03:13:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.460 03:13:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.460 03:13:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.460 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.460 "name": "raid_bdev1", 00:13:13.460 "uuid": "cd8490e5-c53a-44b1-bad2-7799e8df05a7", 00:13:13.460 "strip_size_kb": 0, 00:13:13.460 "state": "online", 00:13:13.460 "raid_level": "raid1", 00:13:13.460 "superblock": false, 00:13:13.460 "num_base_bdevs": 4, 00:13:13.460 "num_base_bdevs_discovered": 3, 00:13:13.460 "num_base_bdevs_operational": 3, 00:13:13.460 "base_bdevs_list": [ 00:13:13.460 { 00:13:13.460 "name": null, 00:13:13.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.460 "is_configured": false, 00:13:13.460 "data_offset": 0, 00:13:13.460 "data_size": 65536 00:13:13.460 }, 00:13:13.460 { 00:13:13.460 "name": "BaseBdev2", 00:13:13.460 "uuid": "2e0cf350-50a3-51dd-aa49-23e1f201c09b", 00:13:13.460 "is_configured": true, 00:13:13.460 "data_offset": 0, 00:13:13.460 "data_size": 65536 00:13:13.461 }, 00:13:13.461 { 00:13:13.461 "name": "BaseBdev3", 00:13:13.461 "uuid": "fd868e90-2c8d-5bbd-994b-1ca0c1b2b915", 00:13:13.461 "is_configured": true, 00:13:13.461 "data_offset": 0, 00:13:13.461 "data_size": 65536 00:13:13.461 }, 00:13:13.461 { 00:13:13.461 "name": "BaseBdev4", 00:13:13.461 "uuid": "11d54e37-462d-51b7-ba45-bc5a396dd73c", 00:13:13.461 "is_configured": true, 00:13:13.461 "data_offset": 0, 00:13:13.461 "data_size": 65536 00:13:13.461 } 00:13:13.461 ] 00:13:13.461 }' 00:13:13.461 03:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.461 03:13:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.981 163.00 IOPS, 489.00 MiB/s [2024-11-18T03:13:17.558Z] 03:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none 
none 00:13:13.981 03:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.981 03:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:13.981 03:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:13.981 03:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.981 03:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.981 03:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.981 03:13:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.981 03:13:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.981 03:13:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.981 03:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.981 "name": "raid_bdev1", 00:13:13.981 "uuid": "cd8490e5-c53a-44b1-bad2-7799e8df05a7", 00:13:13.981 "strip_size_kb": 0, 00:13:13.981 "state": "online", 00:13:13.981 "raid_level": "raid1", 00:13:13.981 "superblock": false, 00:13:13.981 "num_base_bdevs": 4, 00:13:13.981 "num_base_bdevs_discovered": 3, 00:13:13.981 "num_base_bdevs_operational": 3, 00:13:13.981 "base_bdevs_list": [ 00:13:13.981 { 00:13:13.981 "name": null, 00:13:13.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.981 "is_configured": false, 00:13:13.981 "data_offset": 0, 00:13:13.981 "data_size": 65536 00:13:13.981 }, 00:13:13.981 { 00:13:13.981 "name": "BaseBdev2", 00:13:13.981 "uuid": "2e0cf350-50a3-51dd-aa49-23e1f201c09b", 00:13:13.981 "is_configured": true, 00:13:13.981 "data_offset": 0, 00:13:13.981 "data_size": 65536 00:13:13.981 }, 00:13:13.981 { 00:13:13.981 "name": "BaseBdev3", 00:13:13.981 "uuid": 
"fd868e90-2c8d-5bbd-994b-1ca0c1b2b915", 00:13:13.981 "is_configured": true, 00:13:13.981 "data_offset": 0, 00:13:13.981 "data_size": 65536 00:13:13.981 }, 00:13:13.981 { 00:13:13.981 "name": "BaseBdev4", 00:13:13.981 "uuid": "11d54e37-462d-51b7-ba45-bc5a396dd73c", 00:13:13.981 "is_configured": true, 00:13:13.981 "data_offset": 0, 00:13:13.981 "data_size": 65536 00:13:13.981 } 00:13:13.981 ] 00:13:13.981 }' 00:13:13.981 03:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.981 03:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:13.981 03:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.981 03:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:13.981 03:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:13.981 03:13:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.982 03:13:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.982 [2024-11-18 03:13:17.499466] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:13.982 03:13:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.982 03:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:14.242 [2024-11-18 03:13:17.555916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:14.242 [2024-11-18 03:13:17.558019] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:14.242 [2024-11-18 03:13:17.671579] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:14.242 [2024-11-18 03:13:17.672919] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:14.501 [2024-11-18 03:13:17.889534] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:14.501 [2024-11-18 03:13:17.889842] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:14.761 [2024-11-18 03:13:18.211992] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:14.761 165.33 IOPS, 496.00 MiB/s [2024-11-18T03:13:18.338Z] [2024-11-18 03:13:18.329201] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:14.761 [2024-11-18 03:13:18.329518] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:15.021 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.021 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.021 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.021 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.021 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.021 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.021 03:13:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.021 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.021 03:13:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.021 [2024-11-18 
03:13:18.566369] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:15.021 03:13:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.282 "name": "raid_bdev1", 00:13:15.282 "uuid": "cd8490e5-c53a-44b1-bad2-7799e8df05a7", 00:13:15.282 "strip_size_kb": 0, 00:13:15.282 "state": "online", 00:13:15.282 "raid_level": "raid1", 00:13:15.282 "superblock": false, 00:13:15.282 "num_base_bdevs": 4, 00:13:15.282 "num_base_bdevs_discovered": 4, 00:13:15.282 "num_base_bdevs_operational": 4, 00:13:15.282 "process": { 00:13:15.282 "type": "rebuild", 00:13:15.282 "target": "spare", 00:13:15.282 "progress": { 00:13:15.282 "blocks": 12288, 00:13:15.282 "percent": 18 00:13:15.282 } 00:13:15.282 }, 00:13:15.282 "base_bdevs_list": [ 00:13:15.282 { 00:13:15.282 "name": "spare", 00:13:15.282 "uuid": "80e04e31-e0e1-5b57-94d9-2daa94ffac63", 00:13:15.282 "is_configured": true, 00:13:15.282 "data_offset": 0, 00:13:15.282 "data_size": 65536 00:13:15.282 }, 00:13:15.282 { 00:13:15.282 "name": "BaseBdev2", 00:13:15.282 "uuid": "2e0cf350-50a3-51dd-aa49-23e1f201c09b", 00:13:15.282 "is_configured": true, 00:13:15.282 "data_offset": 0, 00:13:15.282 "data_size": 65536 00:13:15.282 }, 00:13:15.282 { 00:13:15.282 "name": "BaseBdev3", 00:13:15.282 "uuid": "fd868e90-2c8d-5bbd-994b-1ca0c1b2b915", 00:13:15.282 "is_configured": true, 00:13:15.282 "data_offset": 0, 00:13:15.282 "data_size": 65536 00:13:15.282 }, 00:13:15.282 { 00:13:15.282 "name": "BaseBdev4", 00:13:15.282 "uuid": "11d54e37-462d-51b7-ba45-bc5a396dd73c", 00:13:15.282 "is_configured": true, 00:13:15.282 "data_offset": 0, 00:13:15.282 "data_size": 65536 00:13:15.282 } 00:13:15.282 ] 00:13:15.282 }' 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.282 
03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.282 [2024-11-18 03:13:18.671011] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:15.282 [2024-11-18 03:13:18.671304] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:15.282 [2024-11-18 03:13:18.689052] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:15.282 [2024-11-18 03:13:18.822079] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:13:15.282 [2024-11-18 03:13:18.822132] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 
00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.282 03:13:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.565 03:13:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.565 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.565 "name": "raid_bdev1", 00:13:15.565 "uuid": "cd8490e5-c53a-44b1-bad2-7799e8df05a7", 00:13:15.565 "strip_size_kb": 0, 00:13:15.565 "state": "online", 00:13:15.565 "raid_level": "raid1", 00:13:15.565 "superblock": false, 00:13:15.565 "num_base_bdevs": 4, 00:13:15.565 "num_base_bdevs_discovered": 3, 00:13:15.565 "num_base_bdevs_operational": 3, 00:13:15.565 "process": { 00:13:15.565 "type": "rebuild", 00:13:15.565 "target": "spare", 00:13:15.565 "progress": { 00:13:15.565 "blocks": 18432, 00:13:15.565 "percent": 28 00:13:15.565 } 00:13:15.565 }, 00:13:15.565 "base_bdevs_list": [ 00:13:15.565 { 00:13:15.565 "name": "spare", 00:13:15.565 "uuid": 
"80e04e31-e0e1-5b57-94d9-2daa94ffac63", 00:13:15.565 "is_configured": true, 00:13:15.565 "data_offset": 0, 00:13:15.565 "data_size": 65536 00:13:15.565 }, 00:13:15.565 { 00:13:15.565 "name": null, 00:13:15.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.565 "is_configured": false, 00:13:15.565 "data_offset": 0, 00:13:15.565 "data_size": 65536 00:13:15.565 }, 00:13:15.565 { 00:13:15.565 "name": "BaseBdev3", 00:13:15.565 "uuid": "fd868e90-2c8d-5bbd-994b-1ca0c1b2b915", 00:13:15.565 "is_configured": true, 00:13:15.565 "data_offset": 0, 00:13:15.565 "data_size": 65536 00:13:15.565 }, 00:13:15.565 { 00:13:15.565 "name": "BaseBdev4", 00:13:15.565 "uuid": "11d54e37-462d-51b7-ba45-bc5a396dd73c", 00:13:15.565 "is_configured": true, 00:13:15.565 "data_offset": 0, 00:13:15.565 "data_size": 65536 00:13:15.565 } 00:13:15.565 ] 00:13:15.565 }' 00:13:15.565 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.565 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.565 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.565 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.565 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=392 00:13:15.565 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:15.565 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.565 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.565 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.565 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.565 03:13:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.565 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.565 03:13:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.565 03:13:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.565 03:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.565 03:13:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.565 03:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.565 "name": "raid_bdev1", 00:13:15.565 "uuid": "cd8490e5-c53a-44b1-bad2-7799e8df05a7", 00:13:15.565 "strip_size_kb": 0, 00:13:15.565 "state": "online", 00:13:15.565 "raid_level": "raid1", 00:13:15.565 "superblock": false, 00:13:15.565 "num_base_bdevs": 4, 00:13:15.565 "num_base_bdevs_discovered": 3, 00:13:15.565 "num_base_bdevs_operational": 3, 00:13:15.565 "process": { 00:13:15.565 "type": "rebuild", 00:13:15.565 "target": "spare", 00:13:15.565 "progress": { 00:13:15.565 "blocks": 20480, 00:13:15.565 "percent": 31 00:13:15.565 } 00:13:15.565 }, 00:13:15.565 "base_bdevs_list": [ 00:13:15.565 { 00:13:15.565 "name": "spare", 00:13:15.565 "uuid": "80e04e31-e0e1-5b57-94d9-2daa94ffac63", 00:13:15.565 "is_configured": true, 00:13:15.565 "data_offset": 0, 00:13:15.565 "data_size": 65536 00:13:15.565 }, 00:13:15.565 { 00:13:15.565 "name": null, 00:13:15.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.565 "is_configured": false, 00:13:15.565 "data_offset": 0, 00:13:15.565 "data_size": 65536 00:13:15.565 }, 00:13:15.565 { 00:13:15.565 "name": "BaseBdev3", 00:13:15.565 "uuid": "fd868e90-2c8d-5bbd-994b-1ca0c1b2b915", 00:13:15.565 "is_configured": true, 00:13:15.565 "data_offset": 0, 00:13:15.565 "data_size": 65536 00:13:15.565 }, 
00:13:15.565 { 00:13:15.565 "name": "BaseBdev4", 00:13:15.565 "uuid": "11d54e37-462d-51b7-ba45-bc5a396dd73c", 00:13:15.565 "is_configured": true, 00:13:15.565 "data_offset": 0, 00:13:15.565 "data_size": 65536 00:13:15.565 } 00:13:15.565 ] 00:13:15.565 }' 00:13:15.565 03:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.566 [2024-11-18 03:13:19.067273] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:15.566 03:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.566 03:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.566 03:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.566 03:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:15.867 144.50 IOPS, 433.50 MiB/s [2024-11-18T03:13:19.444Z] [2024-11-18 03:13:19.305355] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:16.436 [2024-11-18 03:13:19.816512] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:16.696 [2024-11-18 03:13:20.053087] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:16.696 03:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:16.696 03:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.696 03:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.696 03:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.696 03:13:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.696 03:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.696 03:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.696 03:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.696 03:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.696 03:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.697 03:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.697 03:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.697 "name": "raid_bdev1", 00:13:16.697 "uuid": "cd8490e5-c53a-44b1-bad2-7799e8df05a7", 00:13:16.697 "strip_size_kb": 0, 00:13:16.697 "state": "online", 00:13:16.697 "raid_level": "raid1", 00:13:16.697 "superblock": false, 00:13:16.697 "num_base_bdevs": 4, 00:13:16.697 "num_base_bdevs_discovered": 3, 00:13:16.697 "num_base_bdevs_operational": 3, 00:13:16.697 "process": { 00:13:16.697 "type": "rebuild", 00:13:16.697 "target": "spare", 00:13:16.697 "progress": { 00:13:16.697 "blocks": 38912, 00:13:16.697 "percent": 59 00:13:16.697 } 00:13:16.697 }, 00:13:16.697 "base_bdevs_list": [ 00:13:16.697 { 00:13:16.697 "name": "spare", 00:13:16.697 "uuid": "80e04e31-e0e1-5b57-94d9-2daa94ffac63", 00:13:16.697 "is_configured": true, 00:13:16.697 "data_offset": 0, 00:13:16.697 "data_size": 65536 00:13:16.697 }, 00:13:16.697 { 00:13:16.697 "name": null, 00:13:16.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.697 "is_configured": false, 00:13:16.697 "data_offset": 0, 00:13:16.697 "data_size": 65536 00:13:16.697 }, 00:13:16.697 { 00:13:16.697 "name": "BaseBdev3", 00:13:16.697 "uuid": "fd868e90-2c8d-5bbd-994b-1ca0c1b2b915", 00:13:16.697 
"is_configured": true, 00:13:16.697 "data_offset": 0, 00:13:16.697 "data_size": 65536 00:13:16.697 }, 00:13:16.697 { 00:13:16.697 "name": "BaseBdev4", 00:13:16.697 "uuid": "11d54e37-462d-51b7-ba45-bc5a396dd73c", 00:13:16.697 "is_configured": true, 00:13:16.697 "data_offset": 0, 00:13:16.697 "data_size": 65536 00:13:16.697 } 00:13:16.697 ] 00:13:16.697 }' 00:13:16.697 03:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.697 03:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:16.697 03:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.697 03:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:16.697 03:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:16.697 126.80 IOPS, 380.40 MiB/s [2024-11-18T03:13:20.274Z] [2024-11-18 03:13:20.268587] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:16.697 [2024-11-18 03:13:20.268830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:16.957 [2024-11-18 03:13:20.519454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:17.217 [2024-11-18 03:13:20.742669] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:17.787 03:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:17.787 03:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.787 03:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.787 03:13:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.787 03:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.787 03:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.787 111.50 IOPS, 334.50 MiB/s [2024-11-18T03:13:21.364Z] 03:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.787 03:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.787 03:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.787 03:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.787 03:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.787 03:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.787 "name": "raid_bdev1", 00:13:17.787 "uuid": "cd8490e5-c53a-44b1-bad2-7799e8df05a7", 00:13:17.787 "strip_size_kb": 0, 00:13:17.787 "state": "online", 00:13:17.787 "raid_level": "raid1", 00:13:17.787 "superblock": false, 00:13:17.787 "num_base_bdevs": 4, 00:13:17.787 "num_base_bdevs_discovered": 3, 00:13:17.787 "num_base_bdevs_operational": 3, 00:13:17.787 "process": { 00:13:17.787 "type": "rebuild", 00:13:17.787 "target": "spare", 00:13:17.787 "progress": { 00:13:17.787 "blocks": 55296, 00:13:17.787 "percent": 84 00:13:17.787 } 00:13:17.787 }, 00:13:17.787 "base_bdevs_list": [ 00:13:17.787 { 00:13:17.787 "name": "spare", 00:13:17.787 "uuid": "80e04e31-e0e1-5b57-94d9-2daa94ffac63", 00:13:17.787 "is_configured": true, 00:13:17.787 "data_offset": 0, 00:13:17.787 "data_size": 65536 00:13:17.787 }, 00:13:17.787 { 00:13:17.787 "name": null, 00:13:17.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.787 "is_configured": false, 00:13:17.787 "data_offset": 0, 00:13:17.787 
"data_size": 65536 00:13:17.787 }, 00:13:17.787 { 00:13:17.787 "name": "BaseBdev3", 00:13:17.787 "uuid": "fd868e90-2c8d-5bbd-994b-1ca0c1b2b915", 00:13:17.787 "is_configured": true, 00:13:17.787 "data_offset": 0, 00:13:17.787 "data_size": 65536 00:13:17.787 }, 00:13:17.787 { 00:13:17.787 "name": "BaseBdev4", 00:13:17.787 "uuid": "11d54e37-462d-51b7-ba45-bc5a396dd73c", 00:13:17.787 "is_configured": true, 00:13:17.787 "data_offset": 0, 00:13:17.787 "data_size": 65536 00:13:17.787 } 00:13:17.787 ] 00:13:17.787 }' 00:13:17.787 03:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.787 03:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.047 03:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.047 03:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.047 03:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:18.306 [2024-11-18 03:13:21.716568] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:18.306 [2024-11-18 03:13:21.816338] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:18.306 [2024-11-18 03:13:21.818155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.875 100.86 IOPS, 302.57 MiB/s [2024-11-18T03:13:22.452Z] 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:18.876 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.876 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.876 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:18.876 03:13:22 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.876 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.876 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.876 03:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.876 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.876 03:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.876 03:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.136 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.136 "name": "raid_bdev1", 00:13:19.136 "uuid": "cd8490e5-c53a-44b1-bad2-7799e8df05a7", 00:13:19.136 "strip_size_kb": 0, 00:13:19.136 "state": "online", 00:13:19.136 "raid_level": "raid1", 00:13:19.136 "superblock": false, 00:13:19.136 "num_base_bdevs": 4, 00:13:19.136 "num_base_bdevs_discovered": 3, 00:13:19.136 "num_base_bdevs_operational": 3, 00:13:19.136 "base_bdevs_list": [ 00:13:19.136 { 00:13:19.136 "name": "spare", 00:13:19.136 "uuid": "80e04e31-e0e1-5b57-94d9-2daa94ffac63", 00:13:19.136 "is_configured": true, 00:13:19.136 "data_offset": 0, 00:13:19.136 "data_size": 65536 00:13:19.136 }, 00:13:19.136 { 00:13:19.136 "name": null, 00:13:19.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.136 "is_configured": false, 00:13:19.136 "data_offset": 0, 00:13:19.136 "data_size": 65536 00:13:19.136 }, 00:13:19.136 { 00:13:19.136 "name": "BaseBdev3", 00:13:19.136 "uuid": "fd868e90-2c8d-5bbd-994b-1ca0c1b2b915", 00:13:19.136 "is_configured": true, 00:13:19.136 "data_offset": 0, 00:13:19.136 "data_size": 65536 00:13:19.136 }, 00:13:19.136 { 00:13:19.136 "name": "BaseBdev4", 00:13:19.136 "uuid": "11d54e37-462d-51b7-ba45-bc5a396dd73c", 
00:13:19.136 "is_configured": true, 00:13:19.136 "data_offset": 0, 00:13:19.136 "data_size": 65536 00:13:19.136 } 00:13:19.136 ] 00:13:19.136 }' 00:13:19.136 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.136 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:19.136 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.136 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:19.136 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:19.136 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:19.136 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.136 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:19.136 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:19.136 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.136 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.136 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.136 03:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.136 03:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.136 03:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.136 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.136 "name": "raid_bdev1", 00:13:19.136 "uuid": "cd8490e5-c53a-44b1-bad2-7799e8df05a7", 
00:13:19.136 "strip_size_kb": 0, 00:13:19.136 "state": "online", 00:13:19.136 "raid_level": "raid1", 00:13:19.136 "superblock": false, 00:13:19.136 "num_base_bdevs": 4, 00:13:19.136 "num_base_bdevs_discovered": 3, 00:13:19.137 "num_base_bdevs_operational": 3, 00:13:19.137 "base_bdevs_list": [ 00:13:19.137 { 00:13:19.137 "name": "spare", 00:13:19.137 "uuid": "80e04e31-e0e1-5b57-94d9-2daa94ffac63", 00:13:19.137 "is_configured": true, 00:13:19.137 "data_offset": 0, 00:13:19.137 "data_size": 65536 00:13:19.137 }, 00:13:19.137 { 00:13:19.137 "name": null, 00:13:19.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.137 "is_configured": false, 00:13:19.137 "data_offset": 0, 00:13:19.137 "data_size": 65536 00:13:19.137 }, 00:13:19.137 { 00:13:19.137 "name": "BaseBdev3", 00:13:19.137 "uuid": "fd868e90-2c8d-5bbd-994b-1ca0c1b2b915", 00:13:19.137 "is_configured": true, 00:13:19.137 "data_offset": 0, 00:13:19.137 "data_size": 65536 00:13:19.137 }, 00:13:19.137 { 00:13:19.137 "name": "BaseBdev4", 00:13:19.137 "uuid": "11d54e37-462d-51b7-ba45-bc5a396dd73c", 00:13:19.137 "is_configured": true, 00:13:19.137 "data_offset": 0, 00:13:19.137 "data_size": 65536 00:13:19.137 } 00:13:19.137 ] 00:13:19.137 }' 00:13:19.137 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.137 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:19.137 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.137 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:19.137 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:19.137 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.137 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:19.137 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.137 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.137 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.137 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.137 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.137 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.137 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.137 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.137 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.137 03:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.137 03:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.397 03:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.397 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.397 "name": "raid_bdev1", 00:13:19.397 "uuid": "cd8490e5-c53a-44b1-bad2-7799e8df05a7", 00:13:19.397 "strip_size_kb": 0, 00:13:19.397 "state": "online", 00:13:19.397 "raid_level": "raid1", 00:13:19.397 "superblock": false, 00:13:19.397 "num_base_bdevs": 4, 00:13:19.397 "num_base_bdevs_discovered": 3, 00:13:19.397 "num_base_bdevs_operational": 3, 00:13:19.397 "base_bdevs_list": [ 00:13:19.397 { 00:13:19.397 "name": "spare", 00:13:19.397 "uuid": "80e04e31-e0e1-5b57-94d9-2daa94ffac63", 00:13:19.397 "is_configured": true, 00:13:19.397 "data_offset": 0, 00:13:19.397 
"data_size": 65536 00:13:19.397 }, 00:13:19.397 { 00:13:19.397 "name": null, 00:13:19.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.397 "is_configured": false, 00:13:19.397 "data_offset": 0, 00:13:19.397 "data_size": 65536 00:13:19.397 }, 00:13:19.397 { 00:13:19.397 "name": "BaseBdev3", 00:13:19.397 "uuid": "fd868e90-2c8d-5bbd-994b-1ca0c1b2b915", 00:13:19.397 "is_configured": true, 00:13:19.397 "data_offset": 0, 00:13:19.397 "data_size": 65536 00:13:19.397 }, 00:13:19.397 { 00:13:19.397 "name": "BaseBdev4", 00:13:19.397 "uuid": "11d54e37-462d-51b7-ba45-bc5a396dd73c", 00:13:19.397 "is_configured": true, 00:13:19.397 "data_offset": 0, 00:13:19.397 "data_size": 65536 00:13:19.397 } 00:13:19.397 ] 00:13:19.397 }' 00:13:19.397 03:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.397 03:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.658 [2024-11-18 03:13:23.097214] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:19.658 [2024-11-18 03:13:23.097261] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:19.658 00:13:19.658 Latency(us) 00:13:19.658 [2024-11-18T03:13:23.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.658 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:19.658 raid_bdev1 : 7.88 94.18 282.53 0.00 0.00 14552.88 307.65 118136.51 00:13:19.658 [2024-11-18T03:13:23.235Z] =================================================================================================================== 
00:13:19.658 [2024-11-18T03:13:23.235Z] Total : 94.18 282.53 0.00 0.00 14552.88 307.65 118136.51 00:13:19.658 [2024-11-18 03:13:23.128707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.658 [2024-11-18 03:13:23.128756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.658 [2024-11-18 03:13:23.128865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.658 [2024-11-18 03:13:23.128886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:19.658 { 00:13:19.658 "results": [ 00:13:19.658 { 00:13:19.658 "job": "raid_bdev1", 00:13:19.658 "core_mask": "0x1", 00:13:19.658 "workload": "randrw", 00:13:19.658 "percentage": 50, 00:13:19.658 "status": "finished", 00:13:19.658 "queue_depth": 2, 00:13:19.658 "io_size": 3145728, 00:13:19.658 "runtime": 7.878791, 00:13:19.658 "iops": 94.17688576838756, 00:13:19.658 "mibps": 282.53065730516266, 00:13:19.658 "io_failed": 0, 00:13:19.658 "io_timeout": 0, 00:13:19.658 "avg_latency_us": 14552.88421003072, 00:13:19.658 "min_latency_us": 307.6471615720524, 00:13:19.658 "max_latency_us": 118136.51004366812 00:13:19.658 } 00:13:19.658 ], 00:13:19.658 "core_count": 1 00:13:19.658 } 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:19.658 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:19.918 /dev/nbd0 00:13:19.918 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:19.918 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:19.918 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:19.918 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:19.918 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:19.918 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i 
<= 20 )) 00:13:19.918 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:19.918 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:19.918 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:19.918 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:19.918 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:19.918 1+0 records in 00:13:19.918 1+0 records out 00:13:19.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389992 s, 10.5 MB/s 00:13:19.918 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.918 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:19.918 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.918 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:19.918 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:19.918 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:19.918 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:19.919 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:19.919 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:19.919 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:19.919 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:19.919 03:13:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:19.919 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:19.919 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:19.919 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:19.919 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:19.919 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:19.919 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:19.919 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:19.919 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:19.919 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:19.919 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:20.179 /dev/nbd1 00:13:20.179 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:20.179 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:20.179 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:20.179 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:20.179 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:20.179 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:20.179 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:20.179 
03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:20.179 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:20.179 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:20.179 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.179 1+0 records in 00:13:20.179 1+0 records out 00:13:20.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429166 s, 9.5 MB/s 00:13:20.179 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.179 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:20.179 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.179 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:20.179 03:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:20.179 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.179 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:20.179 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:20.439 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:20.439 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:20.439 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:20.439 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:20.439 03:13:23 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:20.439 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.439 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:20.439 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:20.439 03:13:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:20.439 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:20.439 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.439 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.439 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:20.439 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:20.439 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.439 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:20.439 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:20.439 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:20.439 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:20.439 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:20.439 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:20.439 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:20.439 03:13:24 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:13:20.439 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:20.439 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:20.439 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:20.439 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:20.700 /dev/nbd1 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.700 1+0 records in 00:13:20.700 1+0 records out 00:13:20.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039365 s, 10.4 MB/s 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:20.700 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.960 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:21.220 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:21.220 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:21.220 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:21.220 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:21.220 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:21.220 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:21.220 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:21.220 03:13:24 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:21.221 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:21.221 03:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89486 00:13:21.221 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 89486 ']' 00:13:21.221 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 89486 00:13:21.221 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:13:21.221 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:21.221 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89486 00:13:21.221 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:21.221 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:21.221 killing process with pid 89486 00:13:21.221 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89486' 00:13:21.221 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 89486 00:13:21.221 Received shutdown signal, test time was about 9.482077 seconds 00:13:21.221 00:13:21.221 Latency(us) 00:13:21.221 [2024-11-18T03:13:24.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.221 [2024-11-18T03:13:24.798Z] =================================================================================================================== 00:13:21.221 [2024-11-18T03:13:24.798Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:21.221 [2024-11-18 03:13:24.725281] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:21.221 03:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 
89486 00:13:21.221 [2024-11-18 03:13:24.771618] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:21.481 03:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:21.481 00:13:21.481 real 0m11.461s 00:13:21.481 user 0m14.907s 00:13:21.481 sys 0m1.688s 00:13:21.481 03:13:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:21.481 03:13:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.481 ************************************ 00:13:21.481 END TEST raid_rebuild_test_io 00:13:21.481 ************************************ 00:13:21.741 03:13:25 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:13:21.741 03:13:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:21.741 03:13:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:21.741 03:13:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:21.741 ************************************ 00:13:21.741 START TEST raid_rebuild_test_sb_io 00:13:21.741 ************************************ 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:21.741 03:13:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local 
raid_bdev_size 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89878 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89878 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 89878 ']' 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:21.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.741 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:21.742 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.742 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:21.742 Zero copy mechanism will not be used. 
00:13:21.742 [2024-11-18 03:13:25.169163] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:21.742 [2024-11-18 03:13:25.169309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89878 ] 00:13:21.742 [2024-11-18 03:13:25.310731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.001 [2024-11-18 03:13:25.359344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.001 [2024-11-18 03:13:25.401475] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.001 [2024-11-18 03:13:25.401521] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.572 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:22.572 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:13:22.572 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:22.572 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:22.572 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.572 03:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.572 BaseBdev1_malloc 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:13:22.572 [2024-11-18 03:13:26.011524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:22.572 [2024-11-18 03:13:26.011595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.572 [2024-11-18 03:13:26.011619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:22.572 [2024-11-18 03:13:26.011633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.572 [2024-11-18 03:13:26.013702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.572 [2024-11-18 03:13:26.013743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:22.572 BaseBdev1 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.572 BaseBdev2_malloc 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.572 [2024-11-18 03:13:26.051477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:13:22.572 [2024-11-18 03:13:26.051556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.572 [2024-11-18 03:13:26.051585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:22.572 [2024-11-18 03:13:26.051598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.572 [2024-11-18 03:13:26.054604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.572 [2024-11-18 03:13:26.054669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:22.572 BaseBdev2 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.572 BaseBdev3_malloc 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.572 [2024-11-18 03:13:26.080238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:22.572 [2024-11-18 03:13:26.080294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.572 [2024-11-18 03:13:26.080316] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:22.572 [2024-11-18 03:13:26.080325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.572 [2024-11-18 03:13:26.082382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.572 [2024-11-18 03:13:26.082417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:22.572 BaseBdev3 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.572 BaseBdev4_malloc 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.572 [2024-11-18 03:13:26.108911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:22.572 [2024-11-18 03:13:26.108982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.572 [2024-11-18 03:13:26.109010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:22.572 [2024-11-18 03:13:26.109018] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.572 [2024-11-18 03:13:26.111074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.572 [2024-11-18 03:13:26.111113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:22.572 BaseBdev4 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.572 spare_malloc 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.572 spare_delay 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.572 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.833 [2024-11-18 03:13:26.149599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:22.833 [2024-11-18 03:13:26.149655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:22.833 [2024-11-18 03:13:26.149677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:22.833 [2024-11-18 03:13:26.149686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.833 [2024-11-18 03:13:26.151785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.833 [2024-11-18 03:13:26.151824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:22.833 spare 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.833 [2024-11-18 03:13:26.161660] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.833 [2024-11-18 03:13:26.163482] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:22.833 [2024-11-18 03:13:26.163558] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:22.833 [2024-11-18 03:13:26.163604] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:22.833 [2024-11-18 03:13:26.163768] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:22.833 [2024-11-18 03:13:26.163784] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:22.833 [2024-11-18 03:13:26.164039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:22.833 [2024-11-18 03:13:26.164193] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: 
raid bdev generic 0x617000006280 00:13:22.833 [2024-11-18 03:13:26.164218] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:22.833 [2024-11-18 03:13:26.164348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.833 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.833 "name": "raid_bdev1", 00:13:22.833 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:22.833 "strip_size_kb": 0, 00:13:22.833 "state": "online", 00:13:22.833 "raid_level": "raid1", 00:13:22.833 "superblock": true, 00:13:22.833 "num_base_bdevs": 4, 00:13:22.833 "num_base_bdevs_discovered": 4, 00:13:22.833 "num_base_bdevs_operational": 4, 00:13:22.833 "base_bdevs_list": [ 00:13:22.833 { 00:13:22.833 "name": "BaseBdev1", 00:13:22.833 "uuid": "3e0babc7-e5ca-50c9-9f8b-69098a9066cd", 00:13:22.833 "is_configured": true, 00:13:22.833 "data_offset": 2048, 00:13:22.833 "data_size": 63488 00:13:22.833 }, 00:13:22.833 { 00:13:22.833 "name": "BaseBdev2", 00:13:22.833 "uuid": "2d91eccf-e123-501d-9b3a-b6d8ee8bd8d7", 00:13:22.833 "is_configured": true, 00:13:22.833 "data_offset": 2048, 00:13:22.833 "data_size": 63488 00:13:22.833 }, 00:13:22.834 { 00:13:22.834 "name": "BaseBdev3", 00:13:22.834 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:22.834 "is_configured": true, 00:13:22.834 "data_offset": 2048, 00:13:22.834 "data_size": 63488 00:13:22.834 }, 00:13:22.834 { 00:13:22.834 "name": "BaseBdev4", 00:13:22.834 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:22.834 "is_configured": true, 00:13:22.834 "data_offset": 2048, 00:13:22.834 "data_size": 63488 00:13:22.834 } 00:13:22.834 ] 00:13:22.834 }' 00:13:22.834 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.834 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.096 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:23.096 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:23.096 03:13:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.096 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.096 [2024-11-18 03:13:26.645144] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.356 [2024-11-18 03:13:26.720665] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.356 "name": "raid_bdev1", 00:13:23.356 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:23.356 "strip_size_kb": 0, 00:13:23.356 "state": "online", 00:13:23.356 
"raid_level": "raid1", 00:13:23.356 "superblock": true, 00:13:23.356 "num_base_bdevs": 4, 00:13:23.356 "num_base_bdevs_discovered": 3, 00:13:23.356 "num_base_bdevs_operational": 3, 00:13:23.356 "base_bdevs_list": [ 00:13:23.356 { 00:13:23.356 "name": null, 00:13:23.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.356 "is_configured": false, 00:13:23.356 "data_offset": 0, 00:13:23.356 "data_size": 63488 00:13:23.356 }, 00:13:23.356 { 00:13:23.356 "name": "BaseBdev2", 00:13:23.356 "uuid": "2d91eccf-e123-501d-9b3a-b6d8ee8bd8d7", 00:13:23.356 "is_configured": true, 00:13:23.356 "data_offset": 2048, 00:13:23.356 "data_size": 63488 00:13:23.356 }, 00:13:23.356 { 00:13:23.356 "name": "BaseBdev3", 00:13:23.356 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:23.356 "is_configured": true, 00:13:23.356 "data_offset": 2048, 00:13:23.356 "data_size": 63488 00:13:23.356 }, 00:13:23.356 { 00:13:23.356 "name": "BaseBdev4", 00:13:23.356 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:23.356 "is_configured": true, 00:13:23.356 "data_offset": 2048, 00:13:23.356 "data_size": 63488 00:13:23.356 } 00:13:23.356 ] 00:13:23.356 }' 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.356 03:13:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.356 [2024-11-18 03:13:26.810522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:23.356 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:23.356 Zero copy mechanism will not be used. 00:13:23.356 Running I/O for 60 seconds... 
00:13:23.616 03:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:23.616 03:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.616 03:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.616 [2024-11-18 03:13:27.126796] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:23.616 03:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.616 03:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:23.616 [2024-11-18 03:13:27.177413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:23.616 [2024-11-18 03:13:27.179448] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:23.875 [2024-11-18 03:13:27.293181] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:23.875 [2024-11-18 03:13:27.293694] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:24.135 [2024-11-18 03:13:27.510830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:24.135 [2024-11-18 03:13:27.511481] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:24.395 145.00 IOPS, 435.00 MiB/s [2024-11-18T03:13:27.972Z] [2024-11-18 03:13:27.867930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:24.655 [2024-11-18 03:13:28.079284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:24.655 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.655 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.655 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.655 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.655 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.655 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.655 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.655 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.655 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.655 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.655 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.655 "name": "raid_bdev1", 00:13:24.655 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:24.655 "strip_size_kb": 0, 00:13:24.655 "state": "online", 00:13:24.655 "raid_level": "raid1", 00:13:24.655 "superblock": true, 00:13:24.655 "num_base_bdevs": 4, 00:13:24.655 "num_base_bdevs_discovered": 4, 00:13:24.655 "num_base_bdevs_operational": 4, 00:13:24.655 "process": { 00:13:24.655 "type": "rebuild", 00:13:24.655 "target": "spare", 00:13:24.655 "progress": { 00:13:24.655 "blocks": 10240, 00:13:24.655 "percent": 16 00:13:24.655 } 00:13:24.655 }, 00:13:24.655 "base_bdevs_list": [ 00:13:24.655 { 00:13:24.655 "name": "spare", 00:13:24.655 "uuid": "20e619fc-e6b4-5daa-b1a4-7164771b6f7f", 00:13:24.655 "is_configured": true, 00:13:24.656 "data_offset": 2048, 00:13:24.656 "data_size": 63488 
00:13:24.656 }, 00:13:24.656 { 00:13:24.656 "name": "BaseBdev2", 00:13:24.656 "uuid": "2d91eccf-e123-501d-9b3a-b6d8ee8bd8d7", 00:13:24.656 "is_configured": true, 00:13:24.656 "data_offset": 2048, 00:13:24.656 "data_size": 63488 00:13:24.656 }, 00:13:24.656 { 00:13:24.656 "name": "BaseBdev3", 00:13:24.656 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:24.656 "is_configured": true, 00:13:24.656 "data_offset": 2048, 00:13:24.656 "data_size": 63488 00:13:24.656 }, 00:13:24.656 { 00:13:24.656 "name": "BaseBdev4", 00:13:24.656 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:24.656 "is_configured": true, 00:13:24.656 "data_offset": 2048, 00:13:24.656 "data_size": 63488 00:13:24.656 } 00:13:24.656 ] 00:13:24.656 }' 00:13:24.656 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.916 [2024-11-18 03:13:28.319409] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:24.916 [2024-11-18 03:13:28.432509] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:24.916 [2024-11-18 03:13:28.442095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.916 [2024-11-18 03:13:28.442162] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:13:24.916 [2024-11-18 03:13:28.442175] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:24.916 [2024-11-18 03:13:28.466509] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.916 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:13:25.176 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.176 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.176 "name": "raid_bdev1", 00:13:25.176 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:25.176 "strip_size_kb": 0, 00:13:25.176 "state": "online", 00:13:25.176 "raid_level": "raid1", 00:13:25.176 "superblock": true, 00:13:25.176 "num_base_bdevs": 4, 00:13:25.176 "num_base_bdevs_discovered": 3, 00:13:25.176 "num_base_bdevs_operational": 3, 00:13:25.176 "base_bdevs_list": [ 00:13:25.176 { 00:13:25.176 "name": null, 00:13:25.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.176 "is_configured": false, 00:13:25.176 "data_offset": 0, 00:13:25.176 "data_size": 63488 00:13:25.176 }, 00:13:25.176 { 00:13:25.176 "name": "BaseBdev2", 00:13:25.176 "uuid": "2d91eccf-e123-501d-9b3a-b6d8ee8bd8d7", 00:13:25.176 "is_configured": true, 00:13:25.176 "data_offset": 2048, 00:13:25.176 "data_size": 63488 00:13:25.176 }, 00:13:25.176 { 00:13:25.176 "name": "BaseBdev3", 00:13:25.176 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:25.176 "is_configured": true, 00:13:25.176 "data_offset": 2048, 00:13:25.176 "data_size": 63488 00:13:25.176 }, 00:13:25.176 { 00:13:25.176 "name": "BaseBdev4", 00:13:25.176 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:25.176 "is_configured": true, 00:13:25.176 "data_offset": 2048, 00:13:25.176 "data_size": 63488 00:13:25.176 } 00:13:25.176 ] 00:13:25.176 }' 00:13:25.176 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.176 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.436 155.00 IOPS, 465.00 MiB/s [2024-11-18T03:13:29.013Z] 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.436 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.436 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.436 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.436 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.436 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.436 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.436 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.436 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.436 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.436 03:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.436 "name": "raid_bdev1", 00:13:25.436 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:25.436 "strip_size_kb": 0, 00:13:25.436 "state": "online", 00:13:25.436 "raid_level": "raid1", 00:13:25.436 "superblock": true, 00:13:25.436 "num_base_bdevs": 4, 00:13:25.436 "num_base_bdevs_discovered": 3, 00:13:25.436 "num_base_bdevs_operational": 3, 00:13:25.437 "base_bdevs_list": [ 00:13:25.437 { 00:13:25.437 "name": null, 00:13:25.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.437 "is_configured": false, 00:13:25.437 "data_offset": 0, 00:13:25.437 "data_size": 63488 00:13:25.437 }, 00:13:25.437 { 00:13:25.437 "name": "BaseBdev2", 00:13:25.437 "uuid": "2d91eccf-e123-501d-9b3a-b6d8ee8bd8d7", 00:13:25.437 "is_configured": true, 00:13:25.437 "data_offset": 2048, 00:13:25.437 "data_size": 63488 00:13:25.437 }, 00:13:25.437 { 00:13:25.437 "name": "BaseBdev3", 00:13:25.437 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 
00:13:25.437 "is_configured": true, 00:13:25.437 "data_offset": 2048, 00:13:25.437 "data_size": 63488 00:13:25.437 }, 00:13:25.437 { 00:13:25.437 "name": "BaseBdev4", 00:13:25.437 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:25.437 "is_configured": true, 00:13:25.437 "data_offset": 2048, 00:13:25.437 "data_size": 63488 00:13:25.437 } 00:13:25.437 ] 00:13:25.437 }' 00:13:25.437 03:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.697 03:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:25.697 03:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.697 03:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.697 03:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:25.697 03:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.697 03:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.697 [2024-11-18 03:13:29.109696] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:25.697 03:13:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.697 03:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:25.697 [2024-11-18 03:13:29.158711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:25.697 [2024-11-18 03:13:29.160718] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:25.956 [2024-11-18 03:13:29.282789] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:25.956 [2024-11-18 03:13:29.283192] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:25.956 [2024-11-18 03:13:29.399163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:25.956 [2024-11-18 03:13:29.399458] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:26.215 [2024-11-18 03:13:29.786728] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:26.475 155.00 IOPS, 465.00 MiB/s [2024-11-18T03:13:30.052Z] [2024-11-18 03:13:30.027063] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.736 "name": "raid_bdev1", 00:13:26.736 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:26.736 "strip_size_kb": 0, 00:13:26.736 "state": "online", 00:13:26.736 "raid_level": "raid1", 00:13:26.736 "superblock": true, 00:13:26.736 "num_base_bdevs": 4, 00:13:26.736 "num_base_bdevs_discovered": 4, 00:13:26.736 "num_base_bdevs_operational": 4, 00:13:26.736 "process": { 00:13:26.736 "type": "rebuild", 00:13:26.736 "target": "spare", 00:13:26.736 "progress": { 00:13:26.736 "blocks": 14336, 00:13:26.736 "percent": 22 00:13:26.736 } 00:13:26.736 }, 00:13:26.736 "base_bdevs_list": [ 00:13:26.736 { 00:13:26.736 "name": "spare", 00:13:26.736 "uuid": "20e619fc-e6b4-5daa-b1a4-7164771b6f7f", 00:13:26.736 "is_configured": true, 00:13:26.736 "data_offset": 2048, 00:13:26.736 "data_size": 63488 00:13:26.736 }, 00:13:26.736 { 00:13:26.736 "name": "BaseBdev2", 00:13:26.736 "uuid": "2d91eccf-e123-501d-9b3a-b6d8ee8bd8d7", 00:13:26.736 "is_configured": true, 00:13:26.736 "data_offset": 2048, 00:13:26.736 "data_size": 63488 00:13:26.736 }, 00:13:26.736 { 00:13:26.736 "name": "BaseBdev3", 00:13:26.736 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:26.736 "is_configured": true, 00:13:26.736 "data_offset": 2048, 00:13:26.736 "data_size": 63488 00:13:26.736 }, 00:13:26.736 { 00:13:26.736 "name": "BaseBdev4", 00:13:26.736 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:26.736 "is_configured": true, 00:13:26.736 "data_offset": 2048, 00:13:26.736 "data_size": 63488 00:13:26.736 } 00:13:26.736 ] 00:13:26.736 }' 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.736 [2024-11-18 03:13:30.248145] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.736 
03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:26.736 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.736 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.996 [2024-11-18 03:13:30.310991] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:27.256 [2024-11-18 03:13:30.571913] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:13:27.256 [2024-11-18 03:13:30.571950] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.256 "name": "raid_bdev1", 00:13:27.256 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:27.256 "strip_size_kb": 0, 00:13:27.256 "state": "online", 00:13:27.256 "raid_level": "raid1", 00:13:27.256 "superblock": true, 00:13:27.256 "num_base_bdevs": 4, 00:13:27.256 "num_base_bdevs_discovered": 3, 00:13:27.256 "num_base_bdevs_operational": 3, 00:13:27.256 "process": { 00:13:27.256 "type": "rebuild", 00:13:27.256 "target": "spare", 00:13:27.256 "progress": { 00:13:27.256 "blocks": 18432, 00:13:27.256 "percent": 29 00:13:27.256 } 00:13:27.256 }, 00:13:27.256 "base_bdevs_list": [ 00:13:27.256 { 00:13:27.256 "name": "spare", 00:13:27.256 "uuid": "20e619fc-e6b4-5daa-b1a4-7164771b6f7f", 00:13:27.256 "is_configured": true, 00:13:27.256 "data_offset": 2048, 00:13:27.256 "data_size": 63488 00:13:27.256 }, 00:13:27.256 { 
00:13:27.256 "name": null, 00:13:27.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.256 "is_configured": false, 00:13:27.256 "data_offset": 0, 00:13:27.256 "data_size": 63488 00:13:27.256 }, 00:13:27.256 { 00:13:27.256 "name": "BaseBdev3", 00:13:27.256 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:27.256 "is_configured": true, 00:13:27.256 "data_offset": 2048, 00:13:27.256 "data_size": 63488 00:13:27.256 }, 00:13:27.256 { 00:13:27.256 "name": "BaseBdev4", 00:13:27.256 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:27.256 "is_configured": true, 00:13:27.256 "data_offset": 2048, 00:13:27.256 "data_size": 63488 00:13:27.256 } 00:13:27.256 ] 00:13:27.256 }' 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=404 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.256 03:13:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.256 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.256 "name": "raid_bdev1", 00:13:27.256 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:27.256 "strip_size_kb": 0, 00:13:27.256 "state": "online", 00:13:27.256 "raid_level": "raid1", 00:13:27.256 "superblock": true, 00:13:27.256 "num_base_bdevs": 4, 00:13:27.256 "num_base_bdevs_discovered": 3, 00:13:27.256 "num_base_bdevs_operational": 3, 00:13:27.256 "process": { 00:13:27.256 "type": "rebuild", 00:13:27.256 "target": "spare", 00:13:27.256 "progress": { 00:13:27.256 "blocks": 20480, 00:13:27.256 "percent": 32 00:13:27.256 } 00:13:27.256 }, 00:13:27.256 "base_bdevs_list": [ 00:13:27.256 { 00:13:27.256 "name": "spare", 00:13:27.256 "uuid": "20e619fc-e6b4-5daa-b1a4-7164771b6f7f", 00:13:27.256 "is_configured": true, 00:13:27.256 "data_offset": 2048, 00:13:27.256 "data_size": 63488 00:13:27.256 }, 00:13:27.256 { 00:13:27.256 "name": null, 00:13:27.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.256 "is_configured": false, 00:13:27.256 "data_offset": 0, 00:13:27.256 "data_size": 63488 00:13:27.256 }, 00:13:27.256 { 00:13:27.256 "name": "BaseBdev3", 00:13:27.256 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:27.256 "is_configured": true, 00:13:27.256 "data_offset": 2048, 00:13:27.256 "data_size": 63488 00:13:27.256 }, 00:13:27.256 { 00:13:27.256 "name": "BaseBdev4", 00:13:27.256 "uuid": 
"8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:27.256 "is_configured": true, 00:13:27.256 "data_offset": 2048, 00:13:27.256 "data_size": 63488 00:13:27.256 } 00:13:27.256 ] 00:13:27.257 }' 00:13:27.257 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.257 [2024-11-18 03:13:30.790939] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:27.257 [2024-11-18 03:13:30.797298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:27.257 133.00 IOPS, 399.00 MiB/s [2024-11-18T03:13:30.834Z] 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.257 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.516 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.516 03:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.776 [2024-11-18 03:13:31.135840] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:28.347 119.00 IOPS, 357.00 MiB/s [2024-11-18T03:13:31.924Z] [2024-11-18 03:13:31.823542] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:28.347 03:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:28.347 03:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.347 03:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.347 03:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:13:28.347 03:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.347 03:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.347 03:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.347 03:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.347 03:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.347 03:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.347 03:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.347 03:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.347 "name": "raid_bdev1", 00:13:28.347 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:28.347 "strip_size_kb": 0, 00:13:28.347 "state": "online", 00:13:28.347 "raid_level": "raid1", 00:13:28.347 "superblock": true, 00:13:28.347 "num_base_bdevs": 4, 00:13:28.347 "num_base_bdevs_discovered": 3, 00:13:28.347 "num_base_bdevs_operational": 3, 00:13:28.347 "process": { 00:13:28.347 "type": "rebuild", 00:13:28.347 "target": "spare", 00:13:28.347 "progress": { 00:13:28.347 "blocks": 38912, 00:13:28.347 "percent": 61 00:13:28.347 } 00:13:28.347 }, 00:13:28.347 "base_bdevs_list": [ 00:13:28.347 { 00:13:28.347 "name": "spare", 00:13:28.347 "uuid": "20e619fc-e6b4-5daa-b1a4-7164771b6f7f", 00:13:28.347 "is_configured": true, 00:13:28.347 "data_offset": 2048, 00:13:28.347 "data_size": 63488 00:13:28.347 }, 00:13:28.347 { 00:13:28.347 "name": null, 00:13:28.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.347 "is_configured": false, 00:13:28.347 "data_offset": 0, 00:13:28.347 "data_size": 63488 00:13:28.347 }, 00:13:28.347 { 00:13:28.347 "name": "BaseBdev3", 00:13:28.347 "uuid": 
"38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:28.347 "is_configured": true, 00:13:28.347 "data_offset": 2048, 00:13:28.347 "data_size": 63488 00:13:28.347 }, 00:13:28.347 { 00:13:28.347 "name": "BaseBdev4", 00:13:28.347 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:28.347 "is_configured": true, 00:13:28.347 "data_offset": 2048, 00:13:28.347 "data_size": 63488 00:13:28.347 } 00:13:28.347 ] 00:13:28.347 }' 00:13:28.607 03:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.607 [2024-11-18 03:13:31.943974] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:28.607 [2024-11-18 03:13:31.944321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:28.607 03:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.607 03:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.607 03:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.607 03:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:28.867 [2024-11-18 03:13:32.399496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:29.437 [2024-11-18 03:13:32.735881] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:29.437 [2024-11-18 03:13:32.736404] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:29.437 105.67 IOPS, 317.00 MiB/s [2024-11-18T03:13:33.014Z] 03:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:29.437 03:13:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.437 03:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.437 03:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.437 03:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.437 03:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.437 03:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.437 03:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.437 03:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.437 03:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.698 03:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.698 03:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.698 "name": "raid_bdev1", 00:13:29.698 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:29.698 "strip_size_kb": 0, 00:13:29.698 "state": "online", 00:13:29.698 "raid_level": "raid1", 00:13:29.698 "superblock": true, 00:13:29.698 "num_base_bdevs": 4, 00:13:29.698 "num_base_bdevs_discovered": 3, 00:13:29.698 "num_base_bdevs_operational": 3, 00:13:29.698 "process": { 00:13:29.698 "type": "rebuild", 00:13:29.698 "target": "spare", 00:13:29.698 "progress": { 00:13:29.698 "blocks": 55296, 00:13:29.698 "percent": 87 00:13:29.698 } 00:13:29.698 }, 00:13:29.698 "base_bdevs_list": [ 00:13:29.698 { 00:13:29.698 "name": "spare", 00:13:29.698 "uuid": "20e619fc-e6b4-5daa-b1a4-7164771b6f7f", 00:13:29.698 "is_configured": true, 00:13:29.698 "data_offset": 2048, 
00:13:29.698 "data_size": 63488 00:13:29.698 }, 00:13:29.698 { 00:13:29.698 "name": null, 00:13:29.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.698 "is_configured": false, 00:13:29.698 "data_offset": 0, 00:13:29.698 "data_size": 63488 00:13:29.698 }, 00:13:29.698 { 00:13:29.698 "name": "BaseBdev3", 00:13:29.698 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:29.698 "is_configured": true, 00:13:29.698 "data_offset": 2048, 00:13:29.698 "data_size": 63488 00:13:29.698 }, 00:13:29.698 { 00:13:29.698 "name": "BaseBdev4", 00:13:29.698 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:29.698 "is_configured": true, 00:13:29.698 "data_offset": 2048, 00:13:29.698 "data_size": 63488 00:13:29.698 } 00:13:29.698 ] 00:13:29.698 }' 00:13:29.698 03:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.698 [2024-11-18 03:13:33.068748] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:29.698 03:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.698 03:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.698 03:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.698 03:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:29.958 [2024-11-18 03:13:33.494783] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:30.218 [2024-11-18 03:13:33.592788] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:30.218 [2024-11-18 03:13:33.595077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.738 97.29 IOPS, 291.86 MiB/s [2024-11-18T03:13:34.315Z] 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # 
(( SECONDS < timeout )) 00:13:30.738 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.738 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.738 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.738 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.738 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.738 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.738 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.738 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.738 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.738 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.738 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.738 "name": "raid_bdev1", 00:13:30.738 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:30.738 "strip_size_kb": 0, 00:13:30.738 "state": "online", 00:13:30.738 "raid_level": "raid1", 00:13:30.738 "superblock": true, 00:13:30.738 "num_base_bdevs": 4, 00:13:30.738 "num_base_bdevs_discovered": 3, 00:13:30.738 "num_base_bdevs_operational": 3, 00:13:30.738 "base_bdevs_list": [ 00:13:30.738 { 00:13:30.738 "name": "spare", 00:13:30.738 "uuid": "20e619fc-e6b4-5daa-b1a4-7164771b6f7f", 00:13:30.738 "is_configured": true, 00:13:30.738 "data_offset": 2048, 00:13:30.738 "data_size": 63488 00:13:30.738 }, 00:13:30.738 { 00:13:30.738 "name": null, 00:13:30.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.738 
"is_configured": false, 00:13:30.738 "data_offset": 0, 00:13:30.738 "data_size": 63488 00:13:30.738 }, 00:13:30.738 { 00:13:30.738 "name": "BaseBdev3", 00:13:30.738 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:30.738 "is_configured": true, 00:13:30.738 "data_offset": 2048, 00:13:30.738 "data_size": 63488 00:13:30.738 }, 00:13:30.738 { 00:13:30.738 "name": "BaseBdev4", 00:13:30.738 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:30.738 "is_configured": true, 00:13:30.738 "data_offset": 2048, 00:13:30.738 "data_size": 63488 00:13:30.738 } 00:13:30.738 ] 00:13:30.738 }' 00:13:30.738 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.738 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:30.738 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.998 "name": "raid_bdev1", 00:13:30.998 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:30.998 "strip_size_kb": 0, 00:13:30.998 "state": "online", 00:13:30.998 "raid_level": "raid1", 00:13:30.998 "superblock": true, 00:13:30.998 "num_base_bdevs": 4, 00:13:30.998 "num_base_bdevs_discovered": 3, 00:13:30.998 "num_base_bdevs_operational": 3, 00:13:30.998 "base_bdevs_list": [ 00:13:30.998 { 00:13:30.998 "name": "spare", 00:13:30.998 "uuid": "20e619fc-e6b4-5daa-b1a4-7164771b6f7f", 00:13:30.998 "is_configured": true, 00:13:30.998 "data_offset": 2048, 00:13:30.998 "data_size": 63488 00:13:30.998 }, 00:13:30.998 { 00:13:30.998 "name": null, 00:13:30.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.998 "is_configured": false, 00:13:30.998 "data_offset": 0, 00:13:30.998 "data_size": 63488 00:13:30.998 }, 00:13:30.998 { 00:13:30.998 "name": "BaseBdev3", 00:13:30.998 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:30.998 "is_configured": true, 00:13:30.998 "data_offset": 2048, 00:13:30.998 "data_size": 63488 00:13:30.998 }, 00:13:30.998 { 00:13:30.998 "name": "BaseBdev4", 00:13:30.998 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:30.998 "is_configured": true, 00:13:30.998 "data_offset": 2048, 00:13:30.998 "data_size": 63488 00:13:30.998 } 00:13:30.998 ] 00:13:30.998 }' 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.998 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.998 "name": 
"raid_bdev1", 00:13:30.998 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:30.998 "strip_size_kb": 0, 00:13:30.999 "state": "online", 00:13:30.999 "raid_level": "raid1", 00:13:30.999 "superblock": true, 00:13:30.999 "num_base_bdevs": 4, 00:13:30.999 "num_base_bdevs_discovered": 3, 00:13:30.999 "num_base_bdevs_operational": 3, 00:13:30.999 "base_bdevs_list": [ 00:13:30.999 { 00:13:30.999 "name": "spare", 00:13:30.999 "uuid": "20e619fc-e6b4-5daa-b1a4-7164771b6f7f", 00:13:30.999 "is_configured": true, 00:13:30.999 "data_offset": 2048, 00:13:30.999 "data_size": 63488 00:13:30.999 }, 00:13:30.999 { 00:13:30.999 "name": null, 00:13:30.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.999 "is_configured": false, 00:13:30.999 "data_offset": 0, 00:13:30.999 "data_size": 63488 00:13:30.999 }, 00:13:30.999 { 00:13:30.999 "name": "BaseBdev3", 00:13:30.999 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:30.999 "is_configured": true, 00:13:30.999 "data_offset": 2048, 00:13:30.999 "data_size": 63488 00:13:30.999 }, 00:13:30.999 { 00:13:30.999 "name": "BaseBdev4", 00:13:30.999 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:30.999 "is_configured": true, 00:13:30.999 "data_offset": 2048, 00:13:30.999 "data_size": 63488 00:13:30.999 } 00:13:30.999 ] 00:13:30.999 }' 00:13:30.999 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.999 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.518 89.62 IOPS, 268.88 MiB/s [2024-11-18T03:13:35.095Z] 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:31.518 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.518 03:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.518 [2024-11-18 03:13:34.973348] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:13:31.519 [2024-11-18 03:13:34.973382] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.519 00:13:31.519 Latency(us) 00:13:31.519 [2024-11-18T03:13:35.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.519 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:31.519 raid_bdev1 : 8.20 88.53 265.60 0.00 0.00 15621.46 295.13 113557.58 00:13:31.519 [2024-11-18T03:13:35.096Z] =================================================================================================================== 00:13:31.519 [2024-11-18T03:13:35.096Z] Total : 88.53 265.60 0.00 0.00 15621.46 295.13 113557.58 00:13:31.519 { 00:13:31.519 "results": [ 00:13:31.519 { 00:13:31.519 "job": "raid_bdev1", 00:13:31.519 "core_mask": "0x1", 00:13:31.519 "workload": "randrw", 00:13:31.519 "percentage": 50, 00:13:31.519 "status": "finished", 00:13:31.519 "queue_depth": 2, 00:13:31.519 "io_size": 3145728, 00:13:31.519 "runtime": 8.20031, 00:13:31.519 "iops": 88.53323837757353, 00:13:31.519 "mibps": 265.5997151327206, 00:13:31.519 "io_failed": 0, 00:13:31.519 "io_timeout": 0, 00:13:31.519 "avg_latency_us": 15621.460175394275, 00:13:31.519 "min_latency_us": 295.12663755458516, 00:13:31.519 "max_latency_us": 113557.57554585153 00:13:31.519 } 00:13:31.519 ], 00:13:31.519 "core_count": 1 00:13:31.519 } 00:13:31.519 [2024-11-18 03:13:35.000943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.519 [2024-11-18 03:13:35.001000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.519 [2024-11-18 03:13:35.001112] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.519 [2024-11-18 03:13:35.001126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:31.519 03:13:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.519 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.519 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:31.519 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.519 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.519 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.519 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:31.519 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:31.519 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:31.519 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:31.519 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:31.519 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:31.519 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:31.519 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:31.519 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:31.519 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:31.519 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:31.519 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:31.519 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:31.781 /dev/nbd0 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.781 1+0 records in 00:13:31.781 1+0 records out 00:13:31.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408823 s, 10.0 MB/s 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:31.781 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:32.071 /dev/nbd1 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.071 1+0 records in 00:13:32.071 1+0 records out 00:13:32.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448694 s, 9.1 MB/s 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.071 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 
00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:32.336 03:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:32.596 /dev/nbd1 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.596 1+0 records in 00:13:32.596 1+0 records out 00:13:32.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388924 s, 10.5 MB/s 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:32.596 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:32.597 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:32.597 
03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.597 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:32.597 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:32.597 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:32.597 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.597 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:32.857 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:32.857 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:32.857 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:32.857 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.857 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.857 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:32.857 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:32.857 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.857 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:32.857 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.857 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:32.857 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:32.857 
03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:32.857 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.857 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.118 
03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.118 [2024-11-18 03:13:36.543085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:33.118 [2024-11-18 03:13:36.543497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.118 [2024-11-18 03:13:36.543582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:33.118 [2024-11-18 03:13:36.543649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.118 [2024-11-18 03:13:36.545885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.118 [2024-11-18 03:13:36.546023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:33.118 [2024-11-18 03:13:36.546212] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:33.118 [2024-11-18 03:13:36.546284] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:33.118 [2024-11-18 03:13:36.546405] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:33.118 [2024-11-18 03:13:36.546514] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:33.118 spare 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.118 [2024-11-18 03:13:36.646403] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:13:33.118 [2024-11-18 03:13:36.646440] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 
63488, blocklen 512 00:13:33.118 [2024-11-18 03:13:36.646722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036fc0 00:13:33.118 [2024-11-18 03:13:36.646877] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:13:33.118 [2024-11-18 03:13:36.646887] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:13:33.118 [2024-11-18 03:13:36.647058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.118 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.379 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.379 "name": "raid_bdev1", 00:13:33.379 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:33.379 "strip_size_kb": 0, 00:13:33.379 "state": "online", 00:13:33.379 "raid_level": "raid1", 00:13:33.379 "superblock": true, 00:13:33.379 "num_base_bdevs": 4, 00:13:33.379 "num_base_bdevs_discovered": 3, 00:13:33.379 "num_base_bdevs_operational": 3, 00:13:33.379 "base_bdevs_list": [ 00:13:33.379 { 00:13:33.379 "name": "spare", 00:13:33.379 "uuid": "20e619fc-e6b4-5daa-b1a4-7164771b6f7f", 00:13:33.379 "is_configured": true, 00:13:33.379 "data_offset": 2048, 00:13:33.379 "data_size": 63488 00:13:33.379 }, 00:13:33.379 { 00:13:33.379 "name": null, 00:13:33.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.379 "is_configured": false, 00:13:33.379 "data_offset": 2048, 00:13:33.379 "data_size": 63488 00:13:33.379 }, 00:13:33.379 { 00:13:33.379 "name": "BaseBdev3", 00:13:33.379 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:33.379 "is_configured": true, 00:13:33.379 "data_offset": 2048, 00:13:33.379 "data_size": 63488 00:13:33.379 }, 00:13:33.379 { 00:13:33.379 "name": "BaseBdev4", 00:13:33.379 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:33.379 "is_configured": true, 00:13:33.379 "data_offset": 2048, 00:13:33.379 "data_size": 63488 00:13:33.379 } 00:13:33.379 ] 00:13:33.379 }' 00:13:33.379 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.379 03:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.640 03:13:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:33.640 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.640 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:33.640 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:33.640 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.640 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.640 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.640 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.640 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.640 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.640 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.640 "name": "raid_bdev1", 00:13:33.640 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:33.640 "strip_size_kb": 0, 00:13:33.640 "state": "online", 00:13:33.640 "raid_level": "raid1", 00:13:33.640 "superblock": true, 00:13:33.640 "num_base_bdevs": 4, 00:13:33.640 "num_base_bdevs_discovered": 3, 00:13:33.640 "num_base_bdevs_operational": 3, 00:13:33.640 "base_bdevs_list": [ 00:13:33.640 { 00:13:33.640 "name": "spare", 00:13:33.640 "uuid": "20e619fc-e6b4-5daa-b1a4-7164771b6f7f", 00:13:33.640 "is_configured": true, 00:13:33.640 "data_offset": 2048, 00:13:33.640 "data_size": 63488 00:13:33.640 }, 00:13:33.640 { 00:13:33.640 "name": null, 00:13:33.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.640 "is_configured": false, 00:13:33.640 "data_offset": 
2048, 00:13:33.640 "data_size": 63488 00:13:33.640 }, 00:13:33.640 { 00:13:33.640 "name": "BaseBdev3", 00:13:33.640 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:33.640 "is_configured": true, 00:13:33.640 "data_offset": 2048, 00:13:33.640 "data_size": 63488 00:13:33.640 }, 00:13:33.640 { 00:13:33.640 "name": "BaseBdev4", 00:13:33.640 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:33.640 "is_configured": true, 00:13:33.640 "data_offset": 2048, 00:13:33.640 "data_size": 63488 00:13:33.640 } 00:13:33.640 ] 00:13:33.640 }' 00:13:33.640 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.640 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:33.640 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.900 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:33.900 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.900 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.900 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.900 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:33.900 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.900 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.900 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:33.900 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.900 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:33.900 [2024-11-18 03:13:37.310089] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.900 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.900 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:33.900 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.900 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.900 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.900 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.901 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:33.901 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.901 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.901 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.901 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.901 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.901 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.901 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.901 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.901 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.901 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:33.901 "name": "raid_bdev1", 00:13:33.901 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:33.901 "strip_size_kb": 0, 00:13:33.901 "state": "online", 00:13:33.901 "raid_level": "raid1", 00:13:33.901 "superblock": true, 00:13:33.901 "num_base_bdevs": 4, 00:13:33.901 "num_base_bdevs_discovered": 2, 00:13:33.901 "num_base_bdevs_operational": 2, 00:13:33.901 "base_bdevs_list": [ 00:13:33.901 { 00:13:33.901 "name": null, 00:13:33.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.901 "is_configured": false, 00:13:33.901 "data_offset": 0, 00:13:33.901 "data_size": 63488 00:13:33.901 }, 00:13:33.901 { 00:13:33.901 "name": null, 00:13:33.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.901 "is_configured": false, 00:13:33.901 "data_offset": 2048, 00:13:33.901 "data_size": 63488 00:13:33.901 }, 00:13:33.901 { 00:13:33.901 "name": "BaseBdev3", 00:13:33.901 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:33.901 "is_configured": true, 00:13:33.901 "data_offset": 2048, 00:13:33.901 "data_size": 63488 00:13:33.901 }, 00:13:33.901 { 00:13:33.901 "name": "BaseBdev4", 00:13:33.901 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:33.901 "is_configured": true, 00:13:33.901 "data_offset": 2048, 00:13:33.901 "data_size": 63488 00:13:33.901 } 00:13:33.901 ] 00:13:33.901 }' 00:13:33.901 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.901 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.471 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:34.471 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.471 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.471 [2024-11-18 03:13:37.749422] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 
00:13:34.471 [2024-11-18 03:13:37.749667] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:34.471 [2024-11-18 03:13:37.749731] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:34.471 [2024-11-18 03:13:37.749812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:34.471 [2024-11-18 03:13:37.753506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037090 00:13:34.471 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.471 03:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:34.471 [2024-11-18 03:13:37.755510] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:35.412 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.412 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.412 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.412 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.412 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.412 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.412 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.412 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.412 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.412 03:13:38 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.412 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.412 "name": "raid_bdev1", 00:13:35.412 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:35.412 "strip_size_kb": 0, 00:13:35.412 "state": "online", 00:13:35.412 "raid_level": "raid1", 00:13:35.412 "superblock": true, 00:13:35.412 "num_base_bdevs": 4, 00:13:35.412 "num_base_bdevs_discovered": 3, 00:13:35.412 "num_base_bdevs_operational": 3, 00:13:35.412 "process": { 00:13:35.412 "type": "rebuild", 00:13:35.412 "target": "spare", 00:13:35.412 "progress": { 00:13:35.412 "blocks": 20480, 00:13:35.412 "percent": 32 00:13:35.412 } 00:13:35.412 }, 00:13:35.413 "base_bdevs_list": [ 00:13:35.413 { 00:13:35.413 "name": "spare", 00:13:35.413 "uuid": "20e619fc-e6b4-5daa-b1a4-7164771b6f7f", 00:13:35.413 "is_configured": true, 00:13:35.413 "data_offset": 2048, 00:13:35.413 "data_size": 63488 00:13:35.413 }, 00:13:35.413 { 00:13:35.413 "name": null, 00:13:35.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.413 "is_configured": false, 00:13:35.413 "data_offset": 2048, 00:13:35.413 "data_size": 63488 00:13:35.413 }, 00:13:35.413 { 00:13:35.413 "name": "BaseBdev3", 00:13:35.413 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:35.413 "is_configured": true, 00:13:35.413 "data_offset": 2048, 00:13:35.413 "data_size": 63488 00:13:35.413 }, 00:13:35.413 { 00:13:35.413 "name": "BaseBdev4", 00:13:35.413 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:35.413 "is_configured": true, 00:13:35.413 "data_offset": 2048, 00:13:35.413 "data_size": 63488 00:13:35.413 } 00:13:35.413 ] 00:13:35.413 }' 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.413 [2024-11-18 03:13:38.912395] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.413 [2024-11-18 03:13:38.959839] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:35.413 [2024-11-18 03:13:38.959904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.413 [2024-11-18 03:13:38.959920] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.413 [2024-11-18 03:13:38.959928] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.413 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.673 03:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.673 03:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.673 "name": "raid_bdev1", 00:13:35.673 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:35.673 "strip_size_kb": 0, 00:13:35.673 "state": "online", 00:13:35.673 "raid_level": "raid1", 00:13:35.673 "superblock": true, 00:13:35.673 "num_base_bdevs": 4, 00:13:35.673 "num_base_bdevs_discovered": 2, 00:13:35.673 "num_base_bdevs_operational": 2, 00:13:35.673 "base_bdevs_list": [ 00:13:35.673 { 00:13:35.673 "name": null, 00:13:35.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.673 "is_configured": false, 00:13:35.673 "data_offset": 0, 00:13:35.673 "data_size": 63488 00:13:35.673 }, 00:13:35.673 { 00:13:35.673 "name": null, 00:13:35.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.673 "is_configured": false, 00:13:35.673 "data_offset": 2048, 00:13:35.673 "data_size": 63488 00:13:35.673 }, 00:13:35.673 { 00:13:35.673 "name": "BaseBdev3", 00:13:35.673 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:35.673 
"is_configured": true, 00:13:35.673 "data_offset": 2048, 00:13:35.673 "data_size": 63488 00:13:35.673 }, 00:13:35.673 { 00:13:35.673 "name": "BaseBdev4", 00:13:35.673 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:35.673 "is_configured": true, 00:13:35.673 "data_offset": 2048, 00:13:35.673 "data_size": 63488 00:13:35.673 } 00:13:35.673 ] 00:13:35.673 }' 00:13:35.673 03:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.673 03:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.933 03:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:35.933 03:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.933 03:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.933 [2024-11-18 03:13:39.391229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:35.933 [2024-11-18 03:13:39.391346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.933 [2024-11-18 03:13:39.391386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:35.933 [2024-11-18 03:13:39.391417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.933 [2024-11-18 03:13:39.391874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.933 [2024-11-18 03:13:39.391936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:35.933 [2024-11-18 03:13:39.392061] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:35.933 [2024-11-18 03:13:39.392106] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:35.933 [2024-11-18 03:13:39.392152] 
bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:35.933 [2024-11-18 03:13:39.392214] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.933 [2024-11-18 03:13:39.395853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:13:35.933 spare 00:13:35.933 03:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.933 03:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:35.933 [2024-11-18 03:13:39.397805] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:36.873 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.873 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.873 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.873 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.873 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.873 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.873 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.873 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.873 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.873 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.134 "name": "raid_bdev1", 00:13:37.134 
"uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:37.134 "strip_size_kb": 0, 00:13:37.134 "state": "online", 00:13:37.134 "raid_level": "raid1", 00:13:37.134 "superblock": true, 00:13:37.134 "num_base_bdevs": 4, 00:13:37.134 "num_base_bdevs_discovered": 3, 00:13:37.134 "num_base_bdevs_operational": 3, 00:13:37.134 "process": { 00:13:37.134 "type": "rebuild", 00:13:37.134 "target": "spare", 00:13:37.134 "progress": { 00:13:37.134 "blocks": 20480, 00:13:37.134 "percent": 32 00:13:37.134 } 00:13:37.134 }, 00:13:37.134 "base_bdevs_list": [ 00:13:37.134 { 00:13:37.134 "name": "spare", 00:13:37.134 "uuid": "20e619fc-e6b4-5daa-b1a4-7164771b6f7f", 00:13:37.134 "is_configured": true, 00:13:37.134 "data_offset": 2048, 00:13:37.134 "data_size": 63488 00:13:37.134 }, 00:13:37.134 { 00:13:37.134 "name": null, 00:13:37.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.134 "is_configured": false, 00:13:37.134 "data_offset": 2048, 00:13:37.134 "data_size": 63488 00:13:37.134 }, 00:13:37.134 { 00:13:37.134 "name": "BaseBdev3", 00:13:37.134 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:37.134 "is_configured": true, 00:13:37.134 "data_offset": 2048, 00:13:37.134 "data_size": 63488 00:13:37.134 }, 00:13:37.134 { 00:13:37.134 "name": "BaseBdev4", 00:13:37.134 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:37.134 "is_configured": true, 00:13:37.134 "data_offset": 2048, 00:13:37.134 "data_size": 63488 00:13:37.134 } 00:13:37.134 ] 00:13:37.134 }' 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.134 03:13:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.134 [2024-11-18 03:13:40.546644] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.134 [2024-11-18 03:13:40.601968] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:37.134 [2024-11-18 03:13:40.602094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.134 [2024-11-18 03:13:40.602132] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.134 [2024-11-18 03:13:40.602153] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.134 03:13:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.134 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.134 "name": "raid_bdev1", 00:13:37.135 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:37.135 "strip_size_kb": 0, 00:13:37.135 "state": "online", 00:13:37.135 "raid_level": "raid1", 00:13:37.135 "superblock": true, 00:13:37.135 "num_base_bdevs": 4, 00:13:37.135 "num_base_bdevs_discovered": 2, 00:13:37.135 "num_base_bdevs_operational": 2, 00:13:37.135 "base_bdevs_list": [ 00:13:37.135 { 00:13:37.135 "name": null, 00:13:37.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.135 "is_configured": false, 00:13:37.135 "data_offset": 0, 00:13:37.135 "data_size": 63488 00:13:37.135 }, 00:13:37.135 { 00:13:37.135 "name": null, 00:13:37.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.135 "is_configured": false, 00:13:37.135 "data_offset": 2048, 00:13:37.135 "data_size": 63488 00:13:37.135 }, 00:13:37.135 { 00:13:37.135 "name": "BaseBdev3", 00:13:37.135 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:37.135 "is_configured": true, 00:13:37.135 "data_offset": 2048, 00:13:37.135 "data_size": 63488 00:13:37.135 }, 00:13:37.135 { 00:13:37.135 "name": "BaseBdev4", 00:13:37.135 "uuid": 
"8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:37.135 "is_configured": true, 00:13:37.135 "data_offset": 2048, 00:13:37.135 "data_size": 63488 00:13:37.135 } 00:13:37.135 ] 00:13:37.135 }' 00:13:37.135 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.135 03:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.706 "name": "raid_bdev1", 00:13:37.706 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:37.706 "strip_size_kb": 0, 00:13:37.706 "state": "online", 00:13:37.706 "raid_level": "raid1", 00:13:37.706 "superblock": true, 00:13:37.706 "num_base_bdevs": 4, 00:13:37.706 "num_base_bdevs_discovered": 2, 00:13:37.706 "num_base_bdevs_operational": 2, 00:13:37.706 
"base_bdevs_list": [ 00:13:37.706 { 00:13:37.706 "name": null, 00:13:37.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.706 "is_configured": false, 00:13:37.706 "data_offset": 0, 00:13:37.706 "data_size": 63488 00:13:37.706 }, 00:13:37.706 { 00:13:37.706 "name": null, 00:13:37.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.706 "is_configured": false, 00:13:37.706 "data_offset": 2048, 00:13:37.706 "data_size": 63488 00:13:37.706 }, 00:13:37.706 { 00:13:37.706 "name": "BaseBdev3", 00:13:37.706 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:37.706 "is_configured": true, 00:13:37.706 "data_offset": 2048, 00:13:37.706 "data_size": 63488 00:13:37.706 }, 00:13:37.706 { 00:13:37.706 "name": "BaseBdev4", 00:13:37.706 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:37.706 "is_configured": true, 00:13:37.706 "data_offset": 2048, 00:13:37.706 "data_size": 63488 00:13:37.706 } 00:13:37.706 ] 00:13:37.706 }' 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 
00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.706 [2024-11-18 03:13:41.197290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:37.706 [2024-11-18 03:13:41.197401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.706 [2024-11-18 03:13:41.197429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:13:37.706 [2024-11-18 03:13:41.197439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.706 [2024-11-18 03:13:41.197853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.706 [2024-11-18 03:13:41.197879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:37.706 [2024-11-18 03:13:41.197954] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:37.706 [2024-11-18 03:13:41.197981] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:37.706 [2024-11-18 03:13:41.198000] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:37.706 [2024-11-18 03:13:41.198010] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:37.706 BaseBdev1 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.706 03:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:38.647 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:38.647 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:13:38.647 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.647 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.647 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.647 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:38.647 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.647 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.647 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.647 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.647 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.647 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.647 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.647 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.907 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.907 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.907 "name": "raid_bdev1", 00:13:38.907 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:38.907 "strip_size_kb": 0, 00:13:38.907 "state": "online", 00:13:38.907 "raid_level": "raid1", 00:13:38.907 "superblock": true, 00:13:38.907 "num_base_bdevs": 4, 00:13:38.907 "num_base_bdevs_discovered": 2, 00:13:38.907 "num_base_bdevs_operational": 2, 00:13:38.907 "base_bdevs_list": [ 00:13:38.907 { 00:13:38.907 
"name": null, 00:13:38.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.907 "is_configured": false, 00:13:38.907 "data_offset": 0, 00:13:38.907 "data_size": 63488 00:13:38.907 }, 00:13:38.907 { 00:13:38.907 "name": null, 00:13:38.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.907 "is_configured": false, 00:13:38.907 "data_offset": 2048, 00:13:38.907 "data_size": 63488 00:13:38.907 }, 00:13:38.907 { 00:13:38.907 "name": "BaseBdev3", 00:13:38.907 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:38.907 "is_configured": true, 00:13:38.907 "data_offset": 2048, 00:13:38.907 "data_size": 63488 00:13:38.907 }, 00:13:38.907 { 00:13:38.907 "name": "BaseBdev4", 00:13:38.907 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:38.907 "is_configured": true, 00:13:38.907 "data_offset": 2048, 00:13:38.907 "data_size": 63488 00:13:38.907 } 00:13:38.907 ] 00:13:38.907 }' 00:13:38.907 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.907 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.167 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.167 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.167 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.167 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.167 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.167 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.167 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.167 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:39.167 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.167 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.167 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.167 "name": "raid_bdev1", 00:13:39.167 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:39.167 "strip_size_kb": 0, 00:13:39.167 "state": "online", 00:13:39.167 "raid_level": "raid1", 00:13:39.167 "superblock": true, 00:13:39.167 "num_base_bdevs": 4, 00:13:39.167 "num_base_bdevs_discovered": 2, 00:13:39.167 "num_base_bdevs_operational": 2, 00:13:39.167 "base_bdevs_list": [ 00:13:39.167 { 00:13:39.167 "name": null, 00:13:39.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.167 "is_configured": false, 00:13:39.168 "data_offset": 0, 00:13:39.168 "data_size": 63488 00:13:39.168 }, 00:13:39.168 { 00:13:39.168 "name": null, 00:13:39.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.168 "is_configured": false, 00:13:39.168 "data_offset": 2048, 00:13:39.168 "data_size": 63488 00:13:39.168 }, 00:13:39.168 { 00:13:39.168 "name": "BaseBdev3", 00:13:39.168 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:39.168 "is_configured": true, 00:13:39.168 "data_offset": 2048, 00:13:39.168 "data_size": 63488 00:13:39.168 }, 00:13:39.168 { 00:13:39.168 "name": "BaseBdev4", 00:13:39.168 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:39.168 "is_configured": true, 00:13:39.168 "data_offset": 2048, 00:13:39.168 "data_size": 63488 00:13:39.168 } 00:13:39.168 ] 00:13:39.168 }' 00:13:39.168 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.168 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.168 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:13:39.168 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.168 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:39.168 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:13:39.168 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:39.168 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:39.168 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:39.168 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:39.168 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:39.168 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:39.168 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.168 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.168 [2024-11-18 03:13:42.734948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.168 [2024-11-18 03:13:42.735154] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:39.168 [2024-11-18 03:13:42.735216] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:39.168 request: 00:13:39.428 { 00:13:39.428 "base_bdev": "BaseBdev1", 00:13:39.428 "raid_bdev": "raid_bdev1", 00:13:39.428 "method": "bdev_raid_add_base_bdev", 00:13:39.428 "req_id": 1 
00:13:39.428 } 00:13:39.428 Got JSON-RPC error response 00:13:39.428 response: 00:13:39.428 { 00:13:39.428 "code": -22, 00:13:39.428 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:39.428 } 00:13:39.428 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:39.428 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:39.428 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:39.428 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:39.428 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:39.428 03:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:40.369 03:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:40.369 03:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.369 03:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.369 03:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.369 03:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.369 03:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:40.369 03:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.369 03:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.369 03:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.369 03:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.369 03:13:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.369 03:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.369 03:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.369 03:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.369 03:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.369 03:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.369 "name": "raid_bdev1", 00:13:40.369 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:40.369 "strip_size_kb": 0, 00:13:40.369 "state": "online", 00:13:40.369 "raid_level": "raid1", 00:13:40.369 "superblock": true, 00:13:40.369 "num_base_bdevs": 4, 00:13:40.369 "num_base_bdevs_discovered": 2, 00:13:40.369 "num_base_bdevs_operational": 2, 00:13:40.369 "base_bdevs_list": [ 00:13:40.369 { 00:13:40.369 "name": null, 00:13:40.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.369 "is_configured": false, 00:13:40.369 "data_offset": 0, 00:13:40.369 "data_size": 63488 00:13:40.369 }, 00:13:40.369 { 00:13:40.369 "name": null, 00:13:40.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.369 "is_configured": false, 00:13:40.369 "data_offset": 2048, 00:13:40.369 "data_size": 63488 00:13:40.369 }, 00:13:40.369 { 00:13:40.369 "name": "BaseBdev3", 00:13:40.369 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:40.369 "is_configured": true, 00:13:40.369 "data_offset": 2048, 00:13:40.369 "data_size": 63488 00:13:40.369 }, 00:13:40.369 { 00:13:40.369 "name": "BaseBdev4", 00:13:40.369 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:40.369 "is_configured": true, 00:13:40.369 "data_offset": 2048, 00:13:40.369 "data_size": 63488 00:13:40.369 } 00:13:40.369 ] 00:13:40.369 }' 00:13:40.369 03:13:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.369 03:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.630 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:40.630 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.630 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:40.630 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:40.630 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.630 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.630 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.630 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.630 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.630 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.630 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.630 "name": "raid_bdev1", 00:13:40.630 "uuid": "d64adbaf-55f8-405f-b98a-402b1e698e36", 00:13:40.630 "strip_size_kb": 0, 00:13:40.630 "state": "online", 00:13:40.630 "raid_level": "raid1", 00:13:40.630 "superblock": true, 00:13:40.630 "num_base_bdevs": 4, 00:13:40.630 "num_base_bdevs_discovered": 2, 00:13:40.630 "num_base_bdevs_operational": 2, 00:13:40.630 "base_bdevs_list": [ 00:13:40.630 { 00:13:40.630 "name": null, 00:13:40.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.630 "is_configured": false, 00:13:40.630 "data_offset": 0, 00:13:40.630 
"data_size": 63488 00:13:40.630 }, 00:13:40.630 { 00:13:40.630 "name": null, 00:13:40.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.630 "is_configured": false, 00:13:40.630 "data_offset": 2048, 00:13:40.630 "data_size": 63488 00:13:40.630 }, 00:13:40.630 { 00:13:40.630 "name": "BaseBdev3", 00:13:40.630 "uuid": "38036bf2-d4f8-59ff-834e-c9ac4d669194", 00:13:40.630 "is_configured": true, 00:13:40.630 "data_offset": 2048, 00:13:40.630 "data_size": 63488 00:13:40.630 }, 00:13:40.630 { 00:13:40.630 "name": "BaseBdev4", 00:13:40.630 "uuid": "8aadc2f4-6a34-5250-b4a1-99ae1ecba68d", 00:13:40.630 "is_configured": true, 00:13:40.630 "data_offset": 2048, 00:13:40.630 "data_size": 63488 00:13:40.630 } 00:13:40.630 ] 00:13:40.630 }' 00:13:40.630 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.890 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:40.890 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.890 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:40.890 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89878 00:13:40.890 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 89878 ']' 00:13:40.890 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 89878 00:13:40.890 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:40.890 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:40.890 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89878 00:13:40.890 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:40.890 
03:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:40.890 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89878' 00:13:40.890 killing process with pid 89878 00:13:40.891 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 89878 00:13:40.891 Received shutdown signal, test time was about 17.536573 seconds 00:13:40.891 00:13:40.891 Latency(us) 00:13:40.891 [2024-11-18T03:13:44.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.891 [2024-11-18T03:13:44.468Z] =================================================================================================================== 00:13:40.891 [2024-11-18T03:13:44.468Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:40.891 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 89878 00:13:40.891 [2024-11-18 03:13:44.315553] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:40.891 [2024-11-18 03:13:44.315691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.891 [2024-11-18 03:13:44.315794] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.891 [2024-11-18 03:13:44.315843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:13:40.891 [2024-11-18 03:13:44.362638] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:41.151 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:41.151 00:13:41.151 real 0m19.523s 00:13:41.151 user 0m26.059s 00:13:41.151 sys 0m2.444s 00:13:41.151 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:41.151 03:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.151 
************************************ 00:13:41.151 END TEST raid_rebuild_test_sb_io 00:13:41.151 ************************************ 00:13:41.151 03:13:44 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:41.151 03:13:44 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:13:41.151 03:13:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:41.151 03:13:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:41.151 03:13:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:41.151 ************************************ 00:13:41.151 START TEST raid5f_state_function_test 00:13:41.151 ************************************ 00:13:41.151 03:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:13:41.151 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:41.151 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:41.151 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:41.151 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:41.151 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:41.152 03:13:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90582 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90582' 00:13:41.152 Process raid pid: 90582 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90582 00:13:41.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 90582 ']' 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:41.152 03:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.413 [2024-11-18 03:13:44.761554] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:41.413 [2024-11-18 03:13:44.761683] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.413 [2024-11-18 03:13:44.923688] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.413 [2024-11-18 03:13:44.973488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.673 [2024-11-18 03:13:45.015726] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.673 [2024-11-18 03:13:45.015762] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.244 [2024-11-18 03:13:45.597034] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:42.244 [2024-11-18 03:13:45.597084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:42.244 [2024-11-18 03:13:45.597097] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:42.244 [2024-11-18 03:13:45.597106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:42.244 [2024-11-18 03:13:45.597112] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:42.244 [2024-11-18 03:13:45.597125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.244 "name": "Existed_Raid", 00:13:42.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.244 "strip_size_kb": 64, 00:13:42.244 "state": "configuring", 00:13:42.244 "raid_level": "raid5f", 00:13:42.244 "superblock": false, 00:13:42.244 "num_base_bdevs": 3, 00:13:42.244 "num_base_bdevs_discovered": 0, 00:13:42.244 "num_base_bdevs_operational": 3, 00:13:42.244 "base_bdevs_list": [ 00:13:42.244 { 00:13:42.244 "name": "BaseBdev1", 00:13:42.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.244 "is_configured": false, 00:13:42.244 "data_offset": 0, 00:13:42.244 "data_size": 0 00:13:42.244 }, 00:13:42.244 { 00:13:42.244 "name": "BaseBdev2", 00:13:42.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.244 "is_configured": false, 00:13:42.244 "data_offset": 0, 00:13:42.244 "data_size": 0 00:13:42.244 }, 00:13:42.244 { 00:13:42.244 "name": "BaseBdev3", 00:13:42.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.244 "is_configured": false, 00:13:42.244 "data_offset": 0, 00:13:42.244 "data_size": 0 00:13:42.244 } 00:13:42.244 ] 00:13:42.244 }' 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.244 03:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.504 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:42.504 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.504 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.765 [2024-11-18 03:13:46.080093] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:42.765 [2024-11-18 03:13:46.080181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006280 name Existed_Raid, state configuring 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.765 [2024-11-18 03:13:46.092106] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:42.765 [2024-11-18 03:13:46.092183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:42.765 [2024-11-18 03:13:46.092210] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:42.765 [2024-11-18 03:13:46.092232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:42.765 [2024-11-18 03:13:46.092249] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:42.765 [2024-11-18 03:13:46.092270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.765 [2024-11-18 03:13:46.112972] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:42.765 BaseBdev1 00:13:42.765 03:13:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.765 [ 00:13:42.765 { 00:13:42.765 "name": "BaseBdev1", 00:13:42.765 "aliases": [ 00:13:42.765 "92adab54-9862-4c68-be4c-1123be563adf" 00:13:42.765 ], 00:13:42.765 "product_name": "Malloc disk", 00:13:42.765 "block_size": 512, 00:13:42.765 "num_blocks": 65536, 00:13:42.765 "uuid": "92adab54-9862-4c68-be4c-1123be563adf", 00:13:42.765 "assigned_rate_limits": { 00:13:42.765 "rw_ios_per_sec": 0, 00:13:42.765 
"rw_mbytes_per_sec": 0, 00:13:42.765 "r_mbytes_per_sec": 0, 00:13:42.765 "w_mbytes_per_sec": 0 00:13:42.765 }, 00:13:42.765 "claimed": true, 00:13:42.765 "claim_type": "exclusive_write", 00:13:42.765 "zoned": false, 00:13:42.765 "supported_io_types": { 00:13:42.765 "read": true, 00:13:42.765 "write": true, 00:13:42.765 "unmap": true, 00:13:42.765 "flush": true, 00:13:42.765 "reset": true, 00:13:42.765 "nvme_admin": false, 00:13:42.765 "nvme_io": false, 00:13:42.765 "nvme_io_md": false, 00:13:42.765 "write_zeroes": true, 00:13:42.765 "zcopy": true, 00:13:42.765 "get_zone_info": false, 00:13:42.765 "zone_management": false, 00:13:42.765 "zone_append": false, 00:13:42.765 "compare": false, 00:13:42.765 "compare_and_write": false, 00:13:42.765 "abort": true, 00:13:42.765 "seek_hole": false, 00:13:42.765 "seek_data": false, 00:13:42.765 "copy": true, 00:13:42.765 "nvme_iov_md": false 00:13:42.765 }, 00:13:42.765 "memory_domains": [ 00:13:42.765 { 00:13:42.765 "dma_device_id": "system", 00:13:42.765 "dma_device_type": 1 00:13:42.765 }, 00:13:42.765 { 00:13:42.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.765 "dma_device_type": 2 00:13:42.765 } 00:13:42.765 ], 00:13:42.765 "driver_specific": {} 00:13:42.765 } 00:13:42.765 ] 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:42.765 03:13:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.765 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.765 "name": "Existed_Raid", 00:13:42.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.765 "strip_size_kb": 64, 00:13:42.765 "state": "configuring", 00:13:42.765 "raid_level": "raid5f", 00:13:42.765 "superblock": false, 00:13:42.765 "num_base_bdevs": 3, 00:13:42.765 "num_base_bdevs_discovered": 1, 00:13:42.765 "num_base_bdevs_operational": 3, 00:13:42.765 "base_bdevs_list": [ 00:13:42.765 { 00:13:42.765 "name": "BaseBdev1", 00:13:42.765 "uuid": "92adab54-9862-4c68-be4c-1123be563adf", 00:13:42.765 "is_configured": true, 00:13:42.766 "data_offset": 0, 00:13:42.766 "data_size": 65536 00:13:42.766 }, 00:13:42.766 { 00:13:42.766 "name": 
"BaseBdev2", 00:13:42.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.766 "is_configured": false, 00:13:42.766 "data_offset": 0, 00:13:42.766 "data_size": 0 00:13:42.766 }, 00:13:42.766 { 00:13:42.766 "name": "BaseBdev3", 00:13:42.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.766 "is_configured": false, 00:13:42.766 "data_offset": 0, 00:13:42.766 "data_size": 0 00:13:42.766 } 00:13:42.766 ] 00:13:42.766 }' 00:13:42.766 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.766 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.026 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:43.026 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.026 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.026 [2024-11-18 03:13:46.600151] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:43.026 [2024-11-18 03:13:46.600261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:43.286 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.286 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:43.286 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.286 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.286 [2024-11-18 03:13:46.608172] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:43.286 [2024-11-18 03:13:46.609954] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:13:43.286 [2024-11-18 03:13:46.610000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:43.286 [2024-11-18 03:13:46.610009] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:43.286 [2024-11-18 03:13:46.610019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:43.286 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.286 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:43.286 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:43.286 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:43.286 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.286 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.286 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:43.286 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.287 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.287 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.287 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.287 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.287 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.287 03:13:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.287 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.287 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.287 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.287 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.287 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.287 "name": "Existed_Raid", 00:13:43.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.287 "strip_size_kb": 64, 00:13:43.287 "state": "configuring", 00:13:43.287 "raid_level": "raid5f", 00:13:43.287 "superblock": false, 00:13:43.287 "num_base_bdevs": 3, 00:13:43.287 "num_base_bdevs_discovered": 1, 00:13:43.287 "num_base_bdevs_operational": 3, 00:13:43.287 "base_bdevs_list": [ 00:13:43.287 { 00:13:43.287 "name": "BaseBdev1", 00:13:43.287 "uuid": "92adab54-9862-4c68-be4c-1123be563adf", 00:13:43.287 "is_configured": true, 00:13:43.287 "data_offset": 0, 00:13:43.287 "data_size": 65536 00:13:43.287 }, 00:13:43.287 { 00:13:43.287 "name": "BaseBdev2", 00:13:43.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.287 "is_configured": false, 00:13:43.287 "data_offset": 0, 00:13:43.287 "data_size": 0 00:13:43.287 }, 00:13:43.287 { 00:13:43.287 "name": "BaseBdev3", 00:13:43.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.287 "is_configured": false, 00:13:43.287 "data_offset": 0, 00:13:43.287 "data_size": 0 00:13:43.287 } 00:13:43.287 ] 00:13:43.287 }' 00:13:43.287 03:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.287 03:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.547 [2024-11-18 03:13:47.029334] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:43.547 BaseBdev2 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:43.547 [ 00:13:43.547 { 00:13:43.547 "name": "BaseBdev2", 00:13:43.547 "aliases": [ 00:13:43.547 "b4411d9a-90ef-443f-aad0-103a59bde3a1" 00:13:43.547 ], 00:13:43.547 "product_name": "Malloc disk", 00:13:43.547 "block_size": 512, 00:13:43.547 "num_blocks": 65536, 00:13:43.547 "uuid": "b4411d9a-90ef-443f-aad0-103a59bde3a1", 00:13:43.547 "assigned_rate_limits": { 00:13:43.547 "rw_ios_per_sec": 0, 00:13:43.547 "rw_mbytes_per_sec": 0, 00:13:43.547 "r_mbytes_per_sec": 0, 00:13:43.547 "w_mbytes_per_sec": 0 00:13:43.547 }, 00:13:43.547 "claimed": true, 00:13:43.547 "claim_type": "exclusive_write", 00:13:43.547 "zoned": false, 00:13:43.547 "supported_io_types": { 00:13:43.547 "read": true, 00:13:43.547 "write": true, 00:13:43.547 "unmap": true, 00:13:43.547 "flush": true, 00:13:43.547 "reset": true, 00:13:43.547 "nvme_admin": false, 00:13:43.547 "nvme_io": false, 00:13:43.547 "nvme_io_md": false, 00:13:43.547 "write_zeroes": true, 00:13:43.547 "zcopy": true, 00:13:43.547 "get_zone_info": false, 00:13:43.547 "zone_management": false, 00:13:43.547 "zone_append": false, 00:13:43.547 "compare": false, 00:13:43.547 "compare_and_write": false, 00:13:43.547 "abort": true, 00:13:43.547 "seek_hole": false, 00:13:43.547 "seek_data": false, 00:13:43.547 "copy": true, 00:13:43.547 "nvme_iov_md": false 00:13:43.547 }, 00:13:43.547 "memory_domains": [ 00:13:43.547 { 00:13:43.547 "dma_device_id": "system", 00:13:43.547 "dma_device_type": 1 00:13:43.547 }, 00:13:43.547 { 00:13:43.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.547 "dma_device_type": 2 00:13:43.547 } 00:13:43.547 ], 00:13:43.547 "driver_specific": {} 00:13:43.547 } 00:13:43.547 ] 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:43.547 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.548 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.548 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.548 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.548 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.548 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.548 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.548 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.548 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.548 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.548 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.808 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:13:43.808 "name": "Existed_Raid", 00:13:43.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.808 "strip_size_kb": 64, 00:13:43.808 "state": "configuring", 00:13:43.808 "raid_level": "raid5f", 00:13:43.808 "superblock": false, 00:13:43.808 "num_base_bdevs": 3, 00:13:43.808 "num_base_bdevs_discovered": 2, 00:13:43.808 "num_base_bdevs_operational": 3, 00:13:43.808 "base_bdevs_list": [ 00:13:43.808 { 00:13:43.808 "name": "BaseBdev1", 00:13:43.808 "uuid": "92adab54-9862-4c68-be4c-1123be563adf", 00:13:43.808 "is_configured": true, 00:13:43.808 "data_offset": 0, 00:13:43.808 "data_size": 65536 00:13:43.808 }, 00:13:43.808 { 00:13:43.808 "name": "BaseBdev2", 00:13:43.808 "uuid": "b4411d9a-90ef-443f-aad0-103a59bde3a1", 00:13:43.808 "is_configured": true, 00:13:43.808 "data_offset": 0, 00:13:43.808 "data_size": 65536 00:13:43.808 }, 00:13:43.808 { 00:13:43.808 "name": "BaseBdev3", 00:13:43.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.808 "is_configured": false, 00:13:43.808 "data_offset": 0, 00:13:43.808 "data_size": 0 00:13:43.808 } 00:13:43.808 ] 00:13:43.808 }' 00:13:43.808 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.808 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.069 [2024-11-18 03:13:47.551480] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:44.069 [2024-11-18 03:13:47.551534] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:44.069 [2024-11-18 03:13:47.551544] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:44.069 [2024-11-18 03:13:47.551840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:44.069 [2024-11-18 03:13:47.552270] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:44.069 [2024-11-18 03:13:47.552282] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:44.069 [2024-11-18 03:13:47.552483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.069 BaseBdev3 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.069 [ 00:13:44.069 { 00:13:44.069 "name": "BaseBdev3", 00:13:44.069 "aliases": [ 00:13:44.069 "c3e5df70-93fc-486a-b3d6-796eb8c2ccbb" 00:13:44.069 ], 00:13:44.069 "product_name": "Malloc disk", 00:13:44.069 "block_size": 512, 00:13:44.069 "num_blocks": 65536, 00:13:44.069 "uuid": "c3e5df70-93fc-486a-b3d6-796eb8c2ccbb", 00:13:44.069 "assigned_rate_limits": { 00:13:44.069 "rw_ios_per_sec": 0, 00:13:44.069 "rw_mbytes_per_sec": 0, 00:13:44.069 "r_mbytes_per_sec": 0, 00:13:44.069 "w_mbytes_per_sec": 0 00:13:44.069 }, 00:13:44.069 "claimed": true, 00:13:44.069 "claim_type": "exclusive_write", 00:13:44.069 "zoned": false, 00:13:44.069 "supported_io_types": { 00:13:44.069 "read": true, 00:13:44.069 "write": true, 00:13:44.069 "unmap": true, 00:13:44.069 "flush": true, 00:13:44.069 "reset": true, 00:13:44.069 "nvme_admin": false, 00:13:44.069 "nvme_io": false, 00:13:44.069 "nvme_io_md": false, 00:13:44.069 "write_zeroes": true, 00:13:44.069 "zcopy": true, 00:13:44.069 "get_zone_info": false, 00:13:44.069 "zone_management": false, 00:13:44.069 "zone_append": false, 00:13:44.069 "compare": false, 00:13:44.069 "compare_and_write": false, 00:13:44.069 "abort": true, 00:13:44.069 "seek_hole": false, 00:13:44.069 "seek_data": false, 00:13:44.069 "copy": true, 00:13:44.069 "nvme_iov_md": false 00:13:44.069 }, 00:13:44.069 "memory_domains": [ 00:13:44.069 { 00:13:44.069 "dma_device_id": "system", 00:13:44.069 "dma_device_type": 1 00:13:44.069 }, 00:13:44.069 { 00:13:44.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.069 "dma_device_type": 2 00:13:44.069 } 00:13:44.069 ], 00:13:44.069 "driver_specific": {} 00:13:44.069 } 00:13:44.069 ] 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.069 03:13:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.069 "name": "Existed_Raid", 00:13:44.069 "uuid": "624ec02a-31d6-40e4-ac2c-b9cad5a6c361", 00:13:44.069 "strip_size_kb": 64, 00:13:44.069 "state": "online", 00:13:44.069 "raid_level": "raid5f", 00:13:44.069 "superblock": false, 00:13:44.069 "num_base_bdevs": 3, 00:13:44.069 "num_base_bdevs_discovered": 3, 00:13:44.069 "num_base_bdevs_operational": 3, 00:13:44.069 "base_bdevs_list": [ 00:13:44.069 { 00:13:44.069 "name": "BaseBdev1", 00:13:44.069 "uuid": "92adab54-9862-4c68-be4c-1123be563adf", 00:13:44.069 "is_configured": true, 00:13:44.069 "data_offset": 0, 00:13:44.069 "data_size": 65536 00:13:44.069 }, 00:13:44.069 { 00:13:44.069 "name": "BaseBdev2", 00:13:44.069 "uuid": "b4411d9a-90ef-443f-aad0-103a59bde3a1", 00:13:44.069 "is_configured": true, 00:13:44.069 "data_offset": 0, 00:13:44.069 "data_size": 65536 00:13:44.069 }, 00:13:44.069 { 00:13:44.069 "name": "BaseBdev3", 00:13:44.069 "uuid": "c3e5df70-93fc-486a-b3d6-796eb8c2ccbb", 00:13:44.069 "is_configured": true, 00:13:44.069 "data_offset": 0, 00:13:44.069 "data_size": 65536 00:13:44.069 } 00:13:44.069 ] 00:13:44.069 }' 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.069 03:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.641 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:44.641 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:44.641 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:44.641 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:44.641 03:13:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:44.641 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:44.641 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:44.641 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:44.641 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.641 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.641 [2024-11-18 03:13:48.026909] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.641 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.641 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:44.641 "name": "Existed_Raid", 00:13:44.641 "aliases": [ 00:13:44.641 "624ec02a-31d6-40e4-ac2c-b9cad5a6c361" 00:13:44.641 ], 00:13:44.641 "product_name": "Raid Volume", 00:13:44.641 "block_size": 512, 00:13:44.641 "num_blocks": 131072, 00:13:44.641 "uuid": "624ec02a-31d6-40e4-ac2c-b9cad5a6c361", 00:13:44.641 "assigned_rate_limits": { 00:13:44.641 "rw_ios_per_sec": 0, 00:13:44.641 "rw_mbytes_per_sec": 0, 00:13:44.641 "r_mbytes_per_sec": 0, 00:13:44.641 "w_mbytes_per_sec": 0 00:13:44.641 }, 00:13:44.641 "claimed": false, 00:13:44.641 "zoned": false, 00:13:44.641 "supported_io_types": { 00:13:44.641 "read": true, 00:13:44.641 "write": true, 00:13:44.641 "unmap": false, 00:13:44.641 "flush": false, 00:13:44.641 "reset": true, 00:13:44.641 "nvme_admin": false, 00:13:44.641 "nvme_io": false, 00:13:44.641 "nvme_io_md": false, 00:13:44.641 "write_zeroes": true, 00:13:44.641 "zcopy": false, 00:13:44.641 "get_zone_info": false, 00:13:44.641 "zone_management": false, 00:13:44.641 "zone_append": false, 
00:13:44.641 "compare": false, 00:13:44.641 "compare_and_write": false, 00:13:44.641 "abort": false, 00:13:44.641 "seek_hole": false, 00:13:44.641 "seek_data": false, 00:13:44.641 "copy": false, 00:13:44.641 "nvme_iov_md": false 00:13:44.641 }, 00:13:44.641 "driver_specific": { 00:13:44.641 "raid": { 00:13:44.641 "uuid": "624ec02a-31d6-40e4-ac2c-b9cad5a6c361", 00:13:44.641 "strip_size_kb": 64, 00:13:44.641 "state": "online", 00:13:44.641 "raid_level": "raid5f", 00:13:44.641 "superblock": false, 00:13:44.641 "num_base_bdevs": 3, 00:13:44.641 "num_base_bdevs_discovered": 3, 00:13:44.642 "num_base_bdevs_operational": 3, 00:13:44.642 "base_bdevs_list": [ 00:13:44.642 { 00:13:44.642 "name": "BaseBdev1", 00:13:44.642 "uuid": "92adab54-9862-4c68-be4c-1123be563adf", 00:13:44.642 "is_configured": true, 00:13:44.642 "data_offset": 0, 00:13:44.642 "data_size": 65536 00:13:44.642 }, 00:13:44.642 { 00:13:44.642 "name": "BaseBdev2", 00:13:44.642 "uuid": "b4411d9a-90ef-443f-aad0-103a59bde3a1", 00:13:44.642 "is_configured": true, 00:13:44.642 "data_offset": 0, 00:13:44.642 "data_size": 65536 00:13:44.642 }, 00:13:44.642 { 00:13:44.642 "name": "BaseBdev3", 00:13:44.642 "uuid": "c3e5df70-93fc-486a-b3d6-796eb8c2ccbb", 00:13:44.642 "is_configured": true, 00:13:44.642 "data_offset": 0, 00:13:44.642 "data_size": 65536 00:13:44.642 } 00:13:44.642 ] 00:13:44.642 } 00:13:44.642 } 00:13:44.642 }' 00:13:44.642 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:44.642 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:44.642 BaseBdev2 00:13:44.642 BaseBdev3' 00:13:44.642 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.642 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:13:44.642 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:44.642 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:44.642 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.642 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.642 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.642 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.642 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.642 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.642 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:44.642 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:44.642 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.642 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.642 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.642 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.902 [2024-11-18 03:13:48.294305] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:44.902 
03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.902 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.903 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:44.903 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.903 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.903 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.903 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.903 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.903 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.903 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.903 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.903 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.903 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.903 "name": "Existed_Raid", 00:13:44.903 "uuid": "624ec02a-31d6-40e4-ac2c-b9cad5a6c361", 00:13:44.903 "strip_size_kb": 64, 00:13:44.903 "state": 
"online", 00:13:44.903 "raid_level": "raid5f", 00:13:44.903 "superblock": false, 00:13:44.903 "num_base_bdevs": 3, 00:13:44.903 "num_base_bdevs_discovered": 2, 00:13:44.903 "num_base_bdevs_operational": 2, 00:13:44.903 "base_bdevs_list": [ 00:13:44.903 { 00:13:44.903 "name": null, 00:13:44.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.903 "is_configured": false, 00:13:44.903 "data_offset": 0, 00:13:44.903 "data_size": 65536 00:13:44.903 }, 00:13:44.903 { 00:13:44.903 "name": "BaseBdev2", 00:13:44.903 "uuid": "b4411d9a-90ef-443f-aad0-103a59bde3a1", 00:13:44.903 "is_configured": true, 00:13:44.903 "data_offset": 0, 00:13:44.903 "data_size": 65536 00:13:44.903 }, 00:13:44.903 { 00:13:44.903 "name": "BaseBdev3", 00:13:44.903 "uuid": "c3e5df70-93fc-486a-b3d6-796eb8c2ccbb", 00:13:44.903 "is_configured": true, 00:13:44.903 "data_offset": 0, 00:13:44.903 "data_size": 65536 00:13:44.903 } 00:13:44.903 ] 00:13:44.903 }' 00:13:44.903 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.903 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.474 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:45.474 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:45.474 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.475 [2024-11-18 03:13:48.800798] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:45.475 [2024-11-18 03:13:48.800951] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:45.475 [2024-11-18 03:13:48.812357] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.475 [2024-11-18 03:13:48.868346] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:45.475 [2024-11-18 03:13:48.868400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.475 BaseBdev2 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:45.475 [ 00:13:45.475 { 00:13:45.475 "name": "BaseBdev2", 00:13:45.475 "aliases": [ 00:13:45.475 "8c217bb2-f5a1-4ade-aa2e-40609058adb9" 00:13:45.475 ], 00:13:45.475 "product_name": "Malloc disk", 00:13:45.475 "block_size": 512, 00:13:45.475 "num_blocks": 65536, 00:13:45.475 "uuid": "8c217bb2-f5a1-4ade-aa2e-40609058adb9", 00:13:45.475 "assigned_rate_limits": { 00:13:45.475 "rw_ios_per_sec": 0, 00:13:45.475 "rw_mbytes_per_sec": 0, 00:13:45.475 "r_mbytes_per_sec": 0, 00:13:45.475 "w_mbytes_per_sec": 0 00:13:45.475 }, 00:13:45.475 "claimed": false, 00:13:45.475 "zoned": false, 00:13:45.475 "supported_io_types": { 00:13:45.475 "read": true, 00:13:45.475 "write": true, 00:13:45.475 "unmap": true, 00:13:45.475 "flush": true, 00:13:45.475 "reset": true, 00:13:45.475 "nvme_admin": false, 00:13:45.475 "nvme_io": false, 00:13:45.475 "nvme_io_md": false, 00:13:45.475 "write_zeroes": true, 00:13:45.475 "zcopy": true, 00:13:45.475 "get_zone_info": false, 00:13:45.475 "zone_management": false, 00:13:45.475 "zone_append": false, 00:13:45.475 "compare": false, 00:13:45.475 "compare_and_write": false, 00:13:45.475 "abort": true, 00:13:45.475 "seek_hole": false, 00:13:45.475 "seek_data": false, 00:13:45.475 "copy": true, 00:13:45.475 "nvme_iov_md": false 00:13:45.475 }, 00:13:45.475 "memory_domains": [ 00:13:45.475 { 00:13:45.475 "dma_device_id": "system", 00:13:45.475 "dma_device_type": 1 00:13:45.475 }, 00:13:45.475 { 00:13:45.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.475 "dma_device_type": 2 00:13:45.475 } 00:13:45.475 ], 00:13:45.475 "driver_specific": {} 00:13:45.475 } 00:13:45.475 ] 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.475 03:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.475 BaseBdev3 00:13:45.475 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.475 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:45.475 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:45.475 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:45.475 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:45.475 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:45.475 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:45.475 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:45.475 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.475 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.475 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.475 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:45.475 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.475 03:13:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:45.475 [ 00:13:45.475 { 00:13:45.475 "name": "BaseBdev3", 00:13:45.475 "aliases": [ 00:13:45.475 "88bf7497-bba4-488c-9a40-7e6caa539f8c" 00:13:45.475 ], 00:13:45.475 "product_name": "Malloc disk", 00:13:45.475 "block_size": 512, 00:13:45.475 "num_blocks": 65536, 00:13:45.475 "uuid": "88bf7497-bba4-488c-9a40-7e6caa539f8c", 00:13:45.475 "assigned_rate_limits": { 00:13:45.475 "rw_ios_per_sec": 0, 00:13:45.475 "rw_mbytes_per_sec": 0, 00:13:45.475 "r_mbytes_per_sec": 0, 00:13:45.475 "w_mbytes_per_sec": 0 00:13:45.475 }, 00:13:45.475 "claimed": false, 00:13:45.475 "zoned": false, 00:13:45.475 "supported_io_types": { 00:13:45.475 "read": true, 00:13:45.475 "write": true, 00:13:45.475 "unmap": true, 00:13:45.475 "flush": true, 00:13:45.475 "reset": true, 00:13:45.475 "nvme_admin": false, 00:13:45.475 "nvme_io": false, 00:13:45.475 "nvme_io_md": false, 00:13:45.476 "write_zeroes": true, 00:13:45.476 "zcopy": true, 00:13:45.476 "get_zone_info": false, 00:13:45.476 "zone_management": false, 00:13:45.476 "zone_append": false, 00:13:45.476 "compare": false, 00:13:45.476 "compare_and_write": false, 00:13:45.476 "abort": true, 00:13:45.476 "seek_hole": false, 00:13:45.476 "seek_data": false, 00:13:45.476 "copy": true, 00:13:45.476 "nvme_iov_md": false 00:13:45.476 }, 00:13:45.476 "memory_domains": [ 00:13:45.476 { 00:13:45.476 "dma_device_id": "system", 00:13:45.476 "dma_device_type": 1 00:13:45.476 }, 00:13:45.476 { 00:13:45.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.476 "dma_device_type": 2 00:13:45.476 } 00:13:45.476 ], 00:13:45.476 "driver_specific": {} 00:13:45.476 } 00:13:45.476 ] 00:13:45.476 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.476 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:45.476 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:45.476 03:13:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:45.476 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:45.476 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.476 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.738 [2024-11-18 03:13:49.049548] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:45.738 [2024-11-18 03:13:49.049647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:45.738 [2024-11-18 03:13:49.049692] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.738 [2024-11-18 03:13:49.051733] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.738 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.738 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:45.738 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.738 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.738 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:45.738 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.738 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.738 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.738 03:13:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.738 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.738 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.738 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.738 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.738 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.738 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.738 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.739 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.739 "name": "Existed_Raid", 00:13:45.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.739 "strip_size_kb": 64, 00:13:45.739 "state": "configuring", 00:13:45.739 "raid_level": "raid5f", 00:13:45.739 "superblock": false, 00:13:45.739 "num_base_bdevs": 3, 00:13:45.739 "num_base_bdevs_discovered": 2, 00:13:45.739 "num_base_bdevs_operational": 3, 00:13:45.739 "base_bdevs_list": [ 00:13:45.739 { 00:13:45.739 "name": "BaseBdev1", 00:13:45.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.739 "is_configured": false, 00:13:45.739 "data_offset": 0, 00:13:45.739 "data_size": 0 00:13:45.739 }, 00:13:45.739 { 00:13:45.739 "name": "BaseBdev2", 00:13:45.739 "uuid": "8c217bb2-f5a1-4ade-aa2e-40609058adb9", 00:13:45.739 "is_configured": true, 00:13:45.739 "data_offset": 0, 00:13:45.739 "data_size": 65536 00:13:45.739 }, 00:13:45.739 { 00:13:45.739 "name": "BaseBdev3", 00:13:45.739 "uuid": "88bf7497-bba4-488c-9a40-7e6caa539f8c", 00:13:45.739 "is_configured": true, 
00:13:45.739 "data_offset": 0, 00:13:45.739 "data_size": 65536 00:13:45.739 } 00:13:45.739 ] 00:13:45.739 }' 00:13:45.739 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.739 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.002 [2024-11-18 03:13:49.476810] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.002 03:13:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.002 "name": "Existed_Raid", 00:13:46.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.002 "strip_size_kb": 64, 00:13:46.002 "state": "configuring", 00:13:46.002 "raid_level": "raid5f", 00:13:46.002 "superblock": false, 00:13:46.002 "num_base_bdevs": 3, 00:13:46.002 "num_base_bdevs_discovered": 1, 00:13:46.002 "num_base_bdevs_operational": 3, 00:13:46.002 "base_bdevs_list": [ 00:13:46.002 { 00:13:46.002 "name": "BaseBdev1", 00:13:46.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.002 "is_configured": false, 00:13:46.002 "data_offset": 0, 00:13:46.002 "data_size": 0 00:13:46.002 }, 00:13:46.002 { 00:13:46.002 "name": null, 00:13:46.002 "uuid": "8c217bb2-f5a1-4ade-aa2e-40609058adb9", 00:13:46.002 "is_configured": false, 00:13:46.002 "data_offset": 0, 00:13:46.002 "data_size": 65536 00:13:46.002 }, 00:13:46.002 { 00:13:46.002 "name": "BaseBdev3", 00:13:46.002 "uuid": "88bf7497-bba4-488c-9a40-7e6caa539f8c", 00:13:46.002 "is_configured": true, 00:13:46.002 "data_offset": 0, 00:13:46.002 "data_size": 65536 00:13:46.002 } 00:13:46.002 ] 00:13:46.002 }' 00:13:46.002 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.002 03:13:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.572 [2024-11-18 03:13:49.935142] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.572 BaseBdev1 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:46.572 03:13:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.572 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.573 [ 00:13:46.573 { 00:13:46.573 "name": "BaseBdev1", 00:13:46.573 "aliases": [ 00:13:46.573 "8da8731c-1eb9-4ed2-91c4-4fc2c5a61870" 00:13:46.573 ], 00:13:46.573 "product_name": "Malloc disk", 00:13:46.573 "block_size": 512, 00:13:46.573 "num_blocks": 65536, 00:13:46.573 "uuid": "8da8731c-1eb9-4ed2-91c4-4fc2c5a61870", 00:13:46.573 "assigned_rate_limits": { 00:13:46.573 "rw_ios_per_sec": 0, 00:13:46.573 "rw_mbytes_per_sec": 0, 00:13:46.573 "r_mbytes_per_sec": 0, 00:13:46.573 "w_mbytes_per_sec": 0 00:13:46.573 }, 00:13:46.573 "claimed": true, 00:13:46.573 "claim_type": "exclusive_write", 00:13:46.573 "zoned": false, 00:13:46.573 "supported_io_types": { 00:13:46.573 "read": true, 00:13:46.573 "write": true, 00:13:46.573 "unmap": true, 00:13:46.573 "flush": true, 00:13:46.573 "reset": true, 00:13:46.573 "nvme_admin": false, 00:13:46.573 "nvme_io": false, 00:13:46.573 "nvme_io_md": false, 00:13:46.573 "write_zeroes": true, 00:13:46.573 "zcopy": true, 00:13:46.573 "get_zone_info": false, 00:13:46.573 "zone_management": false, 00:13:46.573 "zone_append": false, 00:13:46.573 
"compare": false, 00:13:46.573 "compare_and_write": false, 00:13:46.573 "abort": true, 00:13:46.573 "seek_hole": false, 00:13:46.573 "seek_data": false, 00:13:46.573 "copy": true, 00:13:46.573 "nvme_iov_md": false 00:13:46.573 }, 00:13:46.573 "memory_domains": [ 00:13:46.573 { 00:13:46.573 "dma_device_id": "system", 00:13:46.573 "dma_device_type": 1 00:13:46.573 }, 00:13:46.573 { 00:13:46.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.573 "dma_device_type": 2 00:13:46.573 } 00:13:46.573 ], 00:13:46.573 "driver_specific": {} 00:13:46.573 } 00:13:46.573 ] 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.573 03:13:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.573 03:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.573 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.573 "name": "Existed_Raid", 00:13:46.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.573 "strip_size_kb": 64, 00:13:46.573 "state": "configuring", 00:13:46.573 "raid_level": "raid5f", 00:13:46.573 "superblock": false, 00:13:46.573 "num_base_bdevs": 3, 00:13:46.573 "num_base_bdevs_discovered": 2, 00:13:46.573 "num_base_bdevs_operational": 3, 00:13:46.573 "base_bdevs_list": [ 00:13:46.573 { 00:13:46.573 "name": "BaseBdev1", 00:13:46.573 "uuid": "8da8731c-1eb9-4ed2-91c4-4fc2c5a61870", 00:13:46.573 "is_configured": true, 00:13:46.573 "data_offset": 0, 00:13:46.573 "data_size": 65536 00:13:46.573 }, 00:13:46.573 { 00:13:46.573 "name": null, 00:13:46.573 "uuid": "8c217bb2-f5a1-4ade-aa2e-40609058adb9", 00:13:46.573 "is_configured": false, 00:13:46.573 "data_offset": 0, 00:13:46.573 "data_size": 65536 00:13:46.573 }, 00:13:46.573 { 00:13:46.573 "name": "BaseBdev3", 00:13:46.573 "uuid": "88bf7497-bba4-488c-9a40-7e6caa539f8c", 00:13:46.573 "is_configured": true, 00:13:46.573 "data_offset": 0, 00:13:46.573 "data_size": 65536 00:13:46.573 } 00:13:46.573 ] 00:13:46.573 }' 00:13:46.573 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.573 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.832 03:13:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.832 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.832 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.832 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:46.832 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.092 [2024-11-18 03:13:50.438346] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.092 03:13:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.092 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.092 "name": "Existed_Raid", 00:13:47.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.092 "strip_size_kb": 64, 00:13:47.092 "state": "configuring", 00:13:47.092 "raid_level": "raid5f", 00:13:47.092 "superblock": false, 00:13:47.092 "num_base_bdevs": 3, 00:13:47.092 "num_base_bdevs_discovered": 1, 00:13:47.092 "num_base_bdevs_operational": 3, 00:13:47.092 "base_bdevs_list": [ 00:13:47.092 { 00:13:47.092 "name": "BaseBdev1", 00:13:47.092 "uuid": "8da8731c-1eb9-4ed2-91c4-4fc2c5a61870", 00:13:47.092 "is_configured": true, 00:13:47.092 "data_offset": 0, 00:13:47.092 "data_size": 65536 00:13:47.092 }, 00:13:47.092 { 00:13:47.092 "name": null, 00:13:47.092 "uuid": "8c217bb2-f5a1-4ade-aa2e-40609058adb9", 00:13:47.092 "is_configured": false, 00:13:47.092 "data_offset": 0, 00:13:47.092 "data_size": 65536 00:13:47.092 }, 00:13:47.092 { 00:13:47.092 "name": null, 
00:13:47.092 "uuid": "88bf7497-bba4-488c-9a40-7e6caa539f8c", 00:13:47.093 "is_configured": false, 00:13:47.093 "data_offset": 0, 00:13:47.093 "data_size": 65536 00:13:47.093 } 00:13:47.093 ] 00:13:47.093 }' 00:13:47.093 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.093 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.353 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:47.353 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.353 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.353 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.353 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.353 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:47.353 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:47.353 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.353 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.354 [2024-11-18 03:13:50.913582] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.354 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.354 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:47.354 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.354 03:13:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.354 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.354 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.354 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.354 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.354 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.354 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.354 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.354 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.354 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.614 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.614 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.614 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.614 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.614 "name": "Existed_Raid", 00:13:47.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.614 "strip_size_kb": 64, 00:13:47.614 "state": "configuring", 00:13:47.614 "raid_level": "raid5f", 00:13:47.614 "superblock": false, 00:13:47.614 "num_base_bdevs": 3, 00:13:47.614 "num_base_bdevs_discovered": 2, 00:13:47.614 "num_base_bdevs_operational": 3, 00:13:47.614 "base_bdevs_list": [ 00:13:47.614 { 
00:13:47.614 "name": "BaseBdev1", 00:13:47.614 "uuid": "8da8731c-1eb9-4ed2-91c4-4fc2c5a61870", 00:13:47.614 "is_configured": true, 00:13:47.614 "data_offset": 0, 00:13:47.614 "data_size": 65536 00:13:47.614 }, 00:13:47.614 { 00:13:47.614 "name": null, 00:13:47.614 "uuid": "8c217bb2-f5a1-4ade-aa2e-40609058adb9", 00:13:47.614 "is_configured": false, 00:13:47.614 "data_offset": 0, 00:13:47.614 "data_size": 65536 00:13:47.614 }, 00:13:47.614 { 00:13:47.614 "name": "BaseBdev3", 00:13:47.614 "uuid": "88bf7497-bba4-488c-9a40-7e6caa539f8c", 00:13:47.614 "is_configured": true, 00:13:47.614 "data_offset": 0, 00:13:47.614 "data_size": 65536 00:13:47.614 } 00:13:47.614 ] 00:13:47.614 }' 00:13:47.614 03:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.614 03:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.873 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.873 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:47.873 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.873 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.873 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.873 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:47.873 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:47.873 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.874 [2024-11-18 03:13:51.340867] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.874 "name": "Existed_Raid", 00:13:47.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.874 "strip_size_kb": 64, 00:13:47.874 "state": "configuring", 00:13:47.874 "raid_level": "raid5f", 00:13:47.874 "superblock": false, 00:13:47.874 "num_base_bdevs": 3, 00:13:47.874 "num_base_bdevs_discovered": 1, 00:13:47.874 "num_base_bdevs_operational": 3, 00:13:47.874 "base_bdevs_list": [ 00:13:47.874 { 00:13:47.874 "name": null, 00:13:47.874 "uuid": "8da8731c-1eb9-4ed2-91c4-4fc2c5a61870", 00:13:47.874 "is_configured": false, 00:13:47.874 "data_offset": 0, 00:13:47.874 "data_size": 65536 00:13:47.874 }, 00:13:47.874 { 00:13:47.874 "name": null, 00:13:47.874 "uuid": "8c217bb2-f5a1-4ade-aa2e-40609058adb9", 00:13:47.874 "is_configured": false, 00:13:47.874 "data_offset": 0, 00:13:47.874 "data_size": 65536 00:13:47.874 }, 00:13:47.874 { 00:13:47.874 "name": "BaseBdev3", 00:13:47.874 "uuid": "88bf7497-bba4-488c-9a40-7e6caa539f8c", 00:13:47.874 "is_configured": true, 00:13:47.874 "data_offset": 0, 00:13:47.874 "data_size": 65536 00:13:47.874 } 00:13:47.874 ] 00:13:47.874 }' 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.874 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.467 [2024-11-18 03:13:51.822746] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.467 03:13:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.467 "name": "Existed_Raid", 00:13:48.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.467 "strip_size_kb": 64, 00:13:48.467 "state": "configuring", 00:13:48.467 "raid_level": "raid5f", 00:13:48.467 "superblock": false, 00:13:48.467 "num_base_bdevs": 3, 00:13:48.467 "num_base_bdevs_discovered": 2, 00:13:48.467 "num_base_bdevs_operational": 3, 00:13:48.467 "base_bdevs_list": [ 00:13:48.467 { 00:13:48.467 "name": null, 00:13:48.467 "uuid": "8da8731c-1eb9-4ed2-91c4-4fc2c5a61870", 00:13:48.467 "is_configured": false, 00:13:48.467 "data_offset": 0, 00:13:48.467 "data_size": 65536 00:13:48.467 }, 00:13:48.467 { 00:13:48.467 "name": "BaseBdev2", 00:13:48.467 "uuid": "8c217bb2-f5a1-4ade-aa2e-40609058adb9", 00:13:48.467 "is_configured": true, 00:13:48.467 "data_offset": 0, 00:13:48.467 "data_size": 65536 00:13:48.467 }, 00:13:48.467 { 00:13:48.467 "name": "BaseBdev3", 00:13:48.467 "uuid": "88bf7497-bba4-488c-9a40-7e6caa539f8c", 00:13:48.467 "is_configured": true, 00:13:48.467 "data_offset": 0, 00:13:48.467 "data_size": 65536 00:13:48.467 } 00:13:48.467 ] 00:13:48.467 }' 00:13:48.467 03:13:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.468 03:13:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.738 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:48.738 
03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.738 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.738 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8da8731c-1eb9-4ed2-91c4-4fc2c5a61870 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.999 [2024-11-18 03:13:52.408918] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:48.999 [2024-11-18 03:13:52.408964] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:48.999 [2024-11-18 03:13:52.408985] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:48.999 [2024-11-18 03:13:52.409258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006080 00:13:48.999 [2024-11-18 03:13:52.409729] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:48.999 [2024-11-18 03:13:52.409741] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:48.999 [2024-11-18 03:13:52.409944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.999 NewBaseBdev 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.999 03:13:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.999 [ 00:13:48.999 { 00:13:48.999 "name": "NewBaseBdev", 00:13:48.999 "aliases": [ 00:13:48.999 "8da8731c-1eb9-4ed2-91c4-4fc2c5a61870" 00:13:48.999 ], 00:13:48.999 "product_name": "Malloc disk", 00:13:48.999 "block_size": 512, 00:13:48.999 "num_blocks": 65536, 00:13:48.999 "uuid": "8da8731c-1eb9-4ed2-91c4-4fc2c5a61870", 00:13:48.999 "assigned_rate_limits": { 00:13:48.999 "rw_ios_per_sec": 0, 00:13:48.999 "rw_mbytes_per_sec": 0, 00:13:48.999 "r_mbytes_per_sec": 0, 00:13:48.999 "w_mbytes_per_sec": 0 00:13:48.999 }, 00:13:48.999 "claimed": true, 00:13:48.999 "claim_type": "exclusive_write", 00:13:48.999 "zoned": false, 00:13:48.999 "supported_io_types": { 00:13:48.999 "read": true, 00:13:48.999 "write": true, 00:13:48.999 "unmap": true, 00:13:48.999 "flush": true, 00:13:48.999 "reset": true, 00:13:48.999 "nvme_admin": false, 00:13:48.999 "nvme_io": false, 00:13:48.999 "nvme_io_md": false, 00:13:48.999 "write_zeroes": true, 00:13:48.999 "zcopy": true, 00:13:48.999 "get_zone_info": false, 00:13:48.999 "zone_management": false, 00:13:48.999 "zone_append": false, 00:13:48.999 "compare": false, 00:13:48.999 "compare_and_write": false, 00:13:48.999 "abort": true, 00:13:48.999 "seek_hole": false, 00:13:48.999 "seek_data": false, 00:13:48.999 "copy": true, 00:13:48.999 "nvme_iov_md": false 00:13:48.999 }, 00:13:48.999 "memory_domains": [ 00:13:48.999 { 00:13:48.999 "dma_device_id": "system", 00:13:48.999 "dma_device_type": 1 00:13:48.999 }, 00:13:48.999 { 00:13:48.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.999 "dma_device_type": 2 00:13:48.999 } 00:13:48.999 ], 00:13:48.999 "driver_specific": {} 00:13:48.999 } 00:13:48.999 ] 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:48.999 03:13:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.999 "name": "Existed_Raid", 00:13:48.999 "uuid": "70bfdc5f-3ad4-494c-84a8-c3c2601b5989", 00:13:48.999 "strip_size_kb": 64, 00:13:48.999 "state": "online", 
00:13:48.999 "raid_level": "raid5f", 00:13:48.999 "superblock": false, 00:13:48.999 "num_base_bdevs": 3, 00:13:48.999 "num_base_bdevs_discovered": 3, 00:13:48.999 "num_base_bdevs_operational": 3, 00:13:48.999 "base_bdevs_list": [ 00:13:48.999 { 00:13:48.999 "name": "NewBaseBdev", 00:13:48.999 "uuid": "8da8731c-1eb9-4ed2-91c4-4fc2c5a61870", 00:13:48.999 "is_configured": true, 00:13:48.999 "data_offset": 0, 00:13:48.999 "data_size": 65536 00:13:48.999 }, 00:13:48.999 { 00:13:48.999 "name": "BaseBdev2", 00:13:48.999 "uuid": "8c217bb2-f5a1-4ade-aa2e-40609058adb9", 00:13:48.999 "is_configured": true, 00:13:48.999 "data_offset": 0, 00:13:48.999 "data_size": 65536 00:13:48.999 }, 00:13:48.999 { 00:13:48.999 "name": "BaseBdev3", 00:13:48.999 "uuid": "88bf7497-bba4-488c-9a40-7e6caa539f8c", 00:13:48.999 "is_configured": true, 00:13:48.999 "data_offset": 0, 00:13:48.999 "data_size": 65536 00:13:48.999 } 00:13:48.999 ] 00:13:48.999 }' 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.999 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.570 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:49.570 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:49.570 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:49.570 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:49.570 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:49.570 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:49.570 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:49.570 03:13:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:49.570 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.570 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.570 [2024-11-18 03:13:52.900319] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:49.570 03:13:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.570 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:49.570 "name": "Existed_Raid", 00:13:49.570 "aliases": [ 00:13:49.570 "70bfdc5f-3ad4-494c-84a8-c3c2601b5989" 00:13:49.570 ], 00:13:49.570 "product_name": "Raid Volume", 00:13:49.570 "block_size": 512, 00:13:49.570 "num_blocks": 131072, 00:13:49.570 "uuid": "70bfdc5f-3ad4-494c-84a8-c3c2601b5989", 00:13:49.570 "assigned_rate_limits": { 00:13:49.570 "rw_ios_per_sec": 0, 00:13:49.570 "rw_mbytes_per_sec": 0, 00:13:49.570 "r_mbytes_per_sec": 0, 00:13:49.570 "w_mbytes_per_sec": 0 00:13:49.570 }, 00:13:49.570 "claimed": false, 00:13:49.570 "zoned": false, 00:13:49.570 "supported_io_types": { 00:13:49.570 "read": true, 00:13:49.570 "write": true, 00:13:49.570 "unmap": false, 00:13:49.570 "flush": false, 00:13:49.570 "reset": true, 00:13:49.570 "nvme_admin": false, 00:13:49.570 "nvme_io": false, 00:13:49.570 "nvme_io_md": false, 00:13:49.570 "write_zeroes": true, 00:13:49.570 "zcopy": false, 00:13:49.570 "get_zone_info": false, 00:13:49.570 "zone_management": false, 00:13:49.570 "zone_append": false, 00:13:49.570 "compare": false, 00:13:49.570 "compare_and_write": false, 00:13:49.570 "abort": false, 00:13:49.570 "seek_hole": false, 00:13:49.570 "seek_data": false, 00:13:49.570 "copy": false, 00:13:49.570 "nvme_iov_md": false 00:13:49.570 }, 00:13:49.570 "driver_specific": { 00:13:49.570 "raid": { 00:13:49.570 "uuid": 
"70bfdc5f-3ad4-494c-84a8-c3c2601b5989", 00:13:49.570 "strip_size_kb": 64, 00:13:49.571 "state": "online", 00:13:49.571 "raid_level": "raid5f", 00:13:49.571 "superblock": false, 00:13:49.571 "num_base_bdevs": 3, 00:13:49.571 "num_base_bdevs_discovered": 3, 00:13:49.571 "num_base_bdevs_operational": 3, 00:13:49.571 "base_bdevs_list": [ 00:13:49.571 { 00:13:49.571 "name": "NewBaseBdev", 00:13:49.571 "uuid": "8da8731c-1eb9-4ed2-91c4-4fc2c5a61870", 00:13:49.571 "is_configured": true, 00:13:49.571 "data_offset": 0, 00:13:49.571 "data_size": 65536 00:13:49.571 }, 00:13:49.571 { 00:13:49.571 "name": "BaseBdev2", 00:13:49.571 "uuid": "8c217bb2-f5a1-4ade-aa2e-40609058adb9", 00:13:49.571 "is_configured": true, 00:13:49.571 "data_offset": 0, 00:13:49.571 "data_size": 65536 00:13:49.571 }, 00:13:49.571 { 00:13:49.571 "name": "BaseBdev3", 00:13:49.571 "uuid": "88bf7497-bba4-488c-9a40-7e6caa539f8c", 00:13:49.571 "is_configured": true, 00:13:49.571 "data_offset": 0, 00:13:49.571 "data_size": 65536 00:13:49.571 } 00:13:49.571 ] 00:13:49.571 } 00:13:49.571 } 00:13:49.571 }' 00:13:49.571 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:49.571 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:49.571 BaseBdev2 00:13:49.571 BaseBdev3' 00:13:49.571 03:13:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.571 03:13:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.571 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.831 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.831 03:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:49.831 03:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:49.831 03:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:13:49.831 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.831 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.831 [2024-11-18 03:13:53.179645] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:49.831 [2024-11-18 03:13:53.179672] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:49.831 [2024-11-18 03:13:53.179760] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:49.831 [2024-11-18 03:13:53.180026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:49.831 [2024-11-18 03:13:53.180057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:13:49.831 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.831 03:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90582
00:13:49.831 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 90582 ']'
00:13:49.831 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 90582
00:13:49.831 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname
00:13:49.831 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:49.831 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90582
00:13:49.831 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:49.831 killing process with pid 90582 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:49.831 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90582'
00:13:49.831 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 90582
00:13:49.831 [2024-11-18 03:13:53.227433] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 90582
00:13:49.831 [2024-11-18 03:13:53.259078] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:13:50.092
00:13:50.092 real 0m8.835s
00:13:50.092 user 0m15.076s
00:13:50.092 sys 0m1.830s
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:50.092 ************************************
00:13:50.092 END TEST raid5f_state_function_test
00:13:50.092 ************************************
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:50.092 03:13:53 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true
00:13:50.092 03:13:53 bdev_raid --
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:13:50.092 03:13:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:50.092 03:13:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:50.092 ************************************
00:13:50.092 START TEST raid5f_state_function_test_sb
00:13:50.092 ************************************
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']'
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=91192
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91192'
00:13:50.092 Process raid pid: 91192
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 91192
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 91192 ']'
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:50.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 03:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:50.092 03:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:50.353 [2024-11-18 03:13:53.671642] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:13:50.353 [2024-11-18 03:13:53.672258] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:50.353 [2024-11-18 03:13:53.835185] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:50.353 [2024-11-18 03:13:53.885371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:13:50.612 [2024-11-18 03:13:53.927486] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:50.612 [2024-11-18 03:13:53.927537] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:51.183 [2024-11-18 03:13:54.512867] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:51.183 [2024-11-18 03:13:54.512919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:51.183 [2024-11-18 03:13:54.512933] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:51.183 [2024-11-18 03:13:54.512943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:51.183 [2024-11-18 03:13:54.512949] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:51.183 [2024-11-18 03:13:54.512969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:51.183 "name": "Existed_Raid",
00:13:51.183 "uuid": "fc784754-761d-4ea4-a9ed-a1c7387a45ef",
00:13:51.183 "strip_size_kb": 64,
00:13:51.183 "state": "configuring",
00:13:51.183 "raid_level": "raid5f",
00:13:51.183 "superblock": true,
00:13:51.183 "num_base_bdevs": 3,
00:13:51.183 "num_base_bdevs_discovered": 0,
00:13:51.183 "num_base_bdevs_operational": 3,
00:13:51.183 "base_bdevs_list": [
00:13:51.183 {
00:13:51.183 "name": "BaseBdev1",
00:13:51.183 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:51.183 "is_configured": false,
00:13:51.183 "data_offset": 0,
00:13:51.183 "data_size": 0
00:13:51.183 },
00:13:51.183 {
00:13:51.183 "name": "BaseBdev2",
00:13:51.183 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:51.183 "is_configured": false,
00:13:51.183 "data_offset": 0,
00:13:51.183 "data_size": 0
00:13:51.183 },
00:13:51.183 {
00:13:51.183 "name": "BaseBdev3",
00:13:51.183 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:51.183 "is_configured": false,
00:13:51.183 "data_offset": 0,
00:13:51.183 "data_size": 0
00:13:51.183 }
00:13:51.183 ]
00:13:51.183 }'
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:51.183 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:51.443 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:13:51.443 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.443 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:51.443 [2024-11-18 03:13:54.920087] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
[2024-11-18 03:13:54.920191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:51.444 [2024-11-18 03:13:54.928131] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-11-18 03:13:54.928228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-11-18 03:13:54.928256] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
[2024-11-18 03:13:54.928279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
[2024-11-18 03:13:54.928298] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
[2024-11-18 03:13:54.928319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:51.444 [2024-11-18 03:13:54.945149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:51.444 BaseBdev1
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:51.444 [
00:13:51.444 {
00:13:51.444 "name": "BaseBdev1",
00:13:51.444 "aliases": [
00:13:51.444 "81937164-555a-4423-aa60-f87efb009c9e"
00:13:51.444 ],
00:13:51.444 "product_name": "Malloc disk",
00:13:51.444 "block_size": 512,
00:13:51.444 "num_blocks": 65536,
00:13:51.444 "uuid": "81937164-555a-4423-aa60-f87efb009c9e",
00:13:51.444 "assigned_rate_limits": {
00:13:51.444 "rw_ios_per_sec": 0,
00:13:51.444 "rw_mbytes_per_sec": 0,
00:13:51.444 "r_mbytes_per_sec": 0,
00:13:51.444 "w_mbytes_per_sec": 0
00:13:51.444 },
00:13:51.444 "claimed": true,
00:13:51.444 "claim_type": "exclusive_write",
00:13:51.444 "zoned": false,
00:13:51.444 "supported_io_types": {
00:13:51.444 "read": true,
00:13:51.444 "write": true,
00:13:51.444 "unmap": true,
00:13:51.444 "flush": true,
00:13:51.444 "reset": true,
00:13:51.444 "nvme_admin": false,
00:13:51.444 "nvme_io": false,
00:13:51.444 "nvme_io_md": false,
00:13:51.444 "write_zeroes": true,
00:13:51.444 "zcopy": true,
00:13:51.444 "get_zone_info": false,
00:13:51.444 "zone_management": false,
00:13:51.444 "zone_append": false,
00:13:51.444 "compare": false,
00:13:51.444 "compare_and_write": false,
00:13:51.444 "abort": true,
00:13:51.444 "seek_hole": false,
00:13:51.444 "seek_data": false,
00:13:51.444 "copy": true,
00:13:51.444 "nvme_iov_md": false
00:13:51.444 },
00:13:51.444 "memory_domains": [
00:13:51.444 {
00:13:51.444 "dma_device_id": "system",
00:13:51.444 "dma_device_type": 1
00:13:51.444 },
00:13:51.444 {
00:13:51.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:51.444 "dma_device_type": 2
00:13:51.444 }
00:13:51.444 ],
00:13:51.444 "driver_specific": {}
00:13:51.444 }
00:13:51.444 ]
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb --
bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.444 03:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:51.444 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.704 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:51.704 "name": "Existed_Raid",
00:13:51.704 "uuid": "645a687c-2c9a-45c2-b8b7-22609649465f",
00:13:51.704 "strip_size_kb": 64,
00:13:51.704 "state": "configuring",
00:13:51.704 "raid_level": "raid5f",
00:13:51.704 "superblock": true,
00:13:51.704 "num_base_bdevs": 3,
00:13:51.704 "num_base_bdevs_discovered": 1,
00:13:51.704 "num_base_bdevs_operational": 3,
00:13:51.704 "base_bdevs_list": [
00:13:51.704 {
00:13:51.704 "name": "BaseBdev1",
00:13:51.704 "uuid": "81937164-555a-4423-aa60-f87efb009c9e",
00:13:51.705 "is_configured": true,
00:13:51.705 "data_offset": 2048,
00:13:51.705 "data_size": 63488
00:13:51.705 },
00:13:51.705 {
00:13:51.705 "name": "BaseBdev2",
00:13:51.705 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:51.705 "is_configured": false,
00:13:51.705 "data_offset": 0,
00:13:51.705 "data_size": 0
00:13:51.705 },
00:13:51.705 {
00:13:51.705 "name": "BaseBdev3",
00:13:51.705 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:51.705 "is_configured": false,
00:13:51.705 "data_offset": 0,
00:13:51.705 "data_size": 0
00:13:51.705 }
00:13:51.705 ]
00:13:51.705 }'
00:13:51.705 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:51.705 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:51.965 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:13:51.965 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.965 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:51.965 [2024-11-18 03:13:55.436401] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:51.965 [2024-11-18 03:13:55.436548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:13:51.965 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.965 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:13:51.965 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.965 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:51.965 [2024-11-18 03:13:55.448388] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:51.965 [2024-11-18 03:13:55.450353] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:51.965 [2024-11-18 03:13:55.450400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:51.965 [2024-11-18 03:13:55.450409] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:51.965 [2024-11-18 03:13:55.450432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:51.965 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.965 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:13:51.965 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:51.965 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:13:51.965 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:51.965 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:51.965 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:51.965 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:51.965 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:51.965 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:51.966 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:51.966 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:51.966 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:51.966 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:51.966 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:51.966 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.966 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:51.966 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.966 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:51.966 "name": "Existed_Raid",
00:13:51.966 "uuid": "a2ba8100-1cb7-4ac2-809d-60ee9b7c98e4",
00:13:51.966 "strip_size_kb": 64,
00:13:51.966 "state": "configuring",
00:13:51.966 "raid_level": "raid5f",
00:13:51.966 "superblock": true,
00:13:51.966 "num_base_bdevs": 3,
00:13:51.966 "num_base_bdevs_discovered": 1,
00:13:51.966 "num_base_bdevs_operational": 3,
00:13:51.966 "base_bdevs_list": [
00:13:51.966 {
00:13:51.966 "name": "BaseBdev1",
00:13:51.966 "uuid": "81937164-555a-4423-aa60-f87efb009c9e",
00:13:51.966 "is_configured": true,
00:13:51.966 "data_offset": 2048,
00:13:51.966 "data_size": 63488
00:13:51.966 },
00:13:51.966 {
00:13:51.966 "name": "BaseBdev2",
00:13:51.966 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:51.966 "is_configured": false,
00:13:51.966 "data_offset": 0,
00:13:51.966 "data_size": 0
00:13:51.966 },
00:13:51.966 {
00:13:51.966 "name": "BaseBdev3",
00:13:51.966 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:51.966 "is_configured": false,
00:13:51.966 "data_offset": 0,
00:13:51.966 "data_size": 0
00:13:51.966 }
00:13:51.966 ]
00:13:51.966 }'
00:13:51.966 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:51.966 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:52.536 [2024-11-18 03:13:55.949663] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:52.536 BaseBdev2
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb --
common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:52.536 [
00:13:52.536 {
00:13:52.536 "name": "BaseBdev2",
00:13:52.536 "aliases": [
00:13:52.536 "c1ef316a-7079-405e-be1f-36aa158491e2"
00:13:52.536 ],
00:13:52.536 "product_name": "Malloc disk",
00:13:52.536 "block_size": 512,
00:13:52.536 "num_blocks": 65536,
00:13:52.536 "uuid": "c1ef316a-7079-405e-be1f-36aa158491e2",
00:13:52.536 "assigned_rate_limits": {
00:13:52.536 "rw_ios_per_sec": 0,
00:13:52.536 "rw_mbytes_per_sec": 0,
00:13:52.536 "r_mbytes_per_sec": 0,
00:13:52.536 "w_mbytes_per_sec": 0
00:13:52.536 },
00:13:52.536 "claimed": true,
00:13:52.536 "claim_type": "exclusive_write",
00:13:52.536 "zoned": false,
00:13:52.536 "supported_io_types": {
00:13:52.536 "read": true,
00:13:52.536 "write": true,
00:13:52.536 "unmap": true,
00:13:52.536 "flush": true,
00:13:52.536 "reset": true,
00:13:52.536 "nvme_admin": false,
00:13:52.536 "nvme_io": false,
00:13:52.536 "nvme_io_md": false,
00:13:52.536 "write_zeroes": true,
00:13:52.536 "zcopy": true,
00:13:52.536 "get_zone_info": false,
00:13:52.536 "zone_management": false,
00:13:52.536 "zone_append": false,
00:13:52.536 "compare": false,
00:13:52.536 "compare_and_write": false,
00:13:52.536 "abort": true,
00:13:52.536 "seek_hole": false,
00:13:52.536 "seek_data": false,
00:13:52.536 "copy": true,
00:13:52.536 "nvme_iov_md": false
00:13:52.536 },
00:13:52.536 "memory_domains": [
00:13:52.536 {
00:13:52.536 "dma_device_id": "system",
00:13:52.536 "dma_device_type": 1
00:13:52.536 },
00:13:52.536 {
00:13:52.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:52.536 "dma_device_type": 2
00:13:52.536 }
00:13:52.536 ],
00:13:52.536 "driver_specific": {}
00:13:52.536 }
00:13:52.536 ]
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:52.536 03:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:52.536 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:52.536 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:52.536 "name": "Existed_Raid",
00:13:52.536 "uuid": "a2ba8100-1cb7-4ac2-809d-60ee9b7c98e4",
00:13:52.536 "strip_size_kb": 64,
00:13:52.536 "state": "configuring",
00:13:52.536 "raid_level": "raid5f",
00:13:52.536 "superblock": true,
00:13:52.536 "num_base_bdevs": 3,
00:13:52.536 "num_base_bdevs_discovered": 2,
00:13:52.536 "num_base_bdevs_operational": 3,
00:13:52.536 "base_bdevs_list": [
00:13:52.536 {
00:13:52.536 "name": "BaseBdev1",
00:13:52.536 "uuid": "81937164-555a-4423-aa60-f87efb009c9e",
00:13:52.536 "is_configured": true,
00:13:52.536 "data_offset": 2048,
00:13:52.536 "data_size": 63488
00:13:52.536 },
00:13:52.537 {
00:13:52.537 "name": "BaseBdev2",
00:13:52.537 "uuid": "c1ef316a-7079-405e-be1f-36aa158491e2",
00:13:52.537 "is_configured": true,
00:13:52.537 "data_offset": 2048,
00:13:52.537 "data_size": 63488
00:13:52.537 },
00:13:52.537 {
00:13:52.537 "name": "BaseBdev3",
00:13:52.537 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:52.537 "is_configured": false,
00:13:52.537 "data_offset": 0,
00:13:52.537 "data_size": 0
00:13:52.537 }
00:13:52.537 ]
00:13:52.537 }'
00:13:52.537 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:52.537 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:53.108 [2024-11-18 03:13:56.432066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:53.108 [2024-11-18 03:13:56.432301] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:13:53.108 [2024-11-18 03:13:56.432326] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:13:53.108 [2024-11-18 03:13:56.432617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 BaseBdev3
00:13:53.108 [2024-11-18 03:13:56.433066] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:13:53.108 [2024-11-18 03:13:56.433086] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:13:53.108 [2024-11-18 03:13:56.433205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.108 [ 00:13:53.108 { 00:13:53.108 "name": "BaseBdev3", 00:13:53.108 "aliases": [ 00:13:53.108 "1d129b41-5132-4673-a2a7-c0e5758fd439" 00:13:53.108 ], 00:13:53.108 "product_name": "Malloc disk", 00:13:53.108 "block_size": 512, 00:13:53.108 "num_blocks": 65536, 00:13:53.108 "uuid": "1d129b41-5132-4673-a2a7-c0e5758fd439", 00:13:53.108 "assigned_rate_limits": { 00:13:53.108 "rw_ios_per_sec": 0, 00:13:53.108 "rw_mbytes_per_sec": 0, 00:13:53.108 "r_mbytes_per_sec": 0, 00:13:53.108 "w_mbytes_per_sec": 0 00:13:53.108 }, 00:13:53.108 "claimed": true, 00:13:53.108 "claim_type": "exclusive_write", 00:13:53.108 "zoned": false, 00:13:53.108 "supported_io_types": { 00:13:53.108 "read": true, 00:13:53.108 "write": true, 00:13:53.108 "unmap": true, 00:13:53.108 "flush": true, 00:13:53.108 "reset": true, 00:13:53.108 "nvme_admin": false, 00:13:53.108 "nvme_io": false, 00:13:53.108 "nvme_io_md": false, 00:13:53.108 "write_zeroes": true, 00:13:53.108 "zcopy": true, 00:13:53.108 "get_zone_info": false, 00:13:53.108 "zone_management": false, 00:13:53.108 "zone_append": false, 00:13:53.108 "compare": false, 00:13:53.108 "compare_and_write": false, 00:13:53.108 "abort": true, 00:13:53.108 "seek_hole": false, 00:13:53.108 "seek_data": false, 00:13:53.108 "copy": true, 00:13:53.108 "nvme_iov_md": 
false 00:13:53.108 }, 00:13:53.108 "memory_domains": [ 00:13:53.108 { 00:13:53.108 "dma_device_id": "system", 00:13:53.108 "dma_device_type": 1 00:13:53.108 }, 00:13:53.108 { 00:13:53.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.108 "dma_device_type": 2 00:13:53.108 } 00:13:53.108 ], 00:13:53.108 "driver_specific": {} 00:13:53.108 } 00:13:53.108 ] 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.108 "name": "Existed_Raid", 00:13:53.108 "uuid": "a2ba8100-1cb7-4ac2-809d-60ee9b7c98e4", 00:13:53.108 "strip_size_kb": 64, 00:13:53.108 "state": "online", 00:13:53.108 "raid_level": "raid5f", 00:13:53.108 "superblock": true, 00:13:53.108 "num_base_bdevs": 3, 00:13:53.108 "num_base_bdevs_discovered": 3, 00:13:53.108 "num_base_bdevs_operational": 3, 00:13:53.108 "base_bdevs_list": [ 00:13:53.108 { 00:13:53.108 "name": "BaseBdev1", 00:13:53.108 "uuid": "81937164-555a-4423-aa60-f87efb009c9e", 00:13:53.108 "is_configured": true, 00:13:53.108 "data_offset": 2048, 00:13:53.108 "data_size": 63488 00:13:53.108 }, 00:13:53.108 { 00:13:53.108 "name": "BaseBdev2", 00:13:53.108 "uuid": "c1ef316a-7079-405e-be1f-36aa158491e2", 00:13:53.108 "is_configured": true, 00:13:53.108 "data_offset": 2048, 00:13:53.108 "data_size": 63488 00:13:53.108 }, 00:13:53.108 { 00:13:53.108 "name": "BaseBdev3", 00:13:53.108 "uuid": "1d129b41-5132-4673-a2a7-c0e5758fd439", 00:13:53.108 "is_configured": true, 00:13:53.108 "data_offset": 2048, 00:13:53.108 "data_size": 63488 00:13:53.108 } 00:13:53.108 ] 00:13:53.108 }' 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.108 03:13:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.369 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:53.369 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:53.369 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:53.369 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:53.369 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:53.369 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:53.369 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:53.369 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:53.369 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.369 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.369 [2024-11-18 03:13:56.927478] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.629 03:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.629 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:53.629 "name": "Existed_Raid", 00:13:53.629 "aliases": [ 00:13:53.629 "a2ba8100-1cb7-4ac2-809d-60ee9b7c98e4" 00:13:53.629 ], 00:13:53.629 "product_name": "Raid Volume", 00:13:53.629 "block_size": 512, 00:13:53.629 "num_blocks": 126976, 00:13:53.629 "uuid": "a2ba8100-1cb7-4ac2-809d-60ee9b7c98e4", 00:13:53.629 "assigned_rate_limits": { 00:13:53.629 "rw_ios_per_sec": 0, 00:13:53.629 "rw_mbytes_per_sec": 0, 00:13:53.629 "r_mbytes_per_sec": 
0, 00:13:53.629 "w_mbytes_per_sec": 0 00:13:53.629 }, 00:13:53.629 "claimed": false, 00:13:53.629 "zoned": false, 00:13:53.629 "supported_io_types": { 00:13:53.629 "read": true, 00:13:53.629 "write": true, 00:13:53.629 "unmap": false, 00:13:53.629 "flush": false, 00:13:53.629 "reset": true, 00:13:53.629 "nvme_admin": false, 00:13:53.629 "nvme_io": false, 00:13:53.629 "nvme_io_md": false, 00:13:53.630 "write_zeroes": true, 00:13:53.630 "zcopy": false, 00:13:53.630 "get_zone_info": false, 00:13:53.630 "zone_management": false, 00:13:53.630 "zone_append": false, 00:13:53.630 "compare": false, 00:13:53.630 "compare_and_write": false, 00:13:53.630 "abort": false, 00:13:53.630 "seek_hole": false, 00:13:53.630 "seek_data": false, 00:13:53.630 "copy": false, 00:13:53.630 "nvme_iov_md": false 00:13:53.630 }, 00:13:53.630 "driver_specific": { 00:13:53.630 "raid": { 00:13:53.630 "uuid": "a2ba8100-1cb7-4ac2-809d-60ee9b7c98e4", 00:13:53.630 "strip_size_kb": 64, 00:13:53.630 "state": "online", 00:13:53.630 "raid_level": "raid5f", 00:13:53.630 "superblock": true, 00:13:53.630 "num_base_bdevs": 3, 00:13:53.630 "num_base_bdevs_discovered": 3, 00:13:53.630 "num_base_bdevs_operational": 3, 00:13:53.630 "base_bdevs_list": [ 00:13:53.630 { 00:13:53.630 "name": "BaseBdev1", 00:13:53.630 "uuid": "81937164-555a-4423-aa60-f87efb009c9e", 00:13:53.630 "is_configured": true, 00:13:53.630 "data_offset": 2048, 00:13:53.630 "data_size": 63488 00:13:53.630 }, 00:13:53.630 { 00:13:53.630 "name": "BaseBdev2", 00:13:53.630 "uuid": "c1ef316a-7079-405e-be1f-36aa158491e2", 00:13:53.630 "is_configured": true, 00:13:53.630 "data_offset": 2048, 00:13:53.630 "data_size": 63488 00:13:53.630 }, 00:13:53.630 { 00:13:53.630 "name": "BaseBdev3", 00:13:53.630 "uuid": "1d129b41-5132-4673-a2a7-c0e5758fd439", 00:13:53.630 "is_configured": true, 00:13:53.630 "data_offset": 2048, 00:13:53.630 "data_size": 63488 00:13:53.630 } 00:13:53.630 ] 00:13:53.630 } 00:13:53.630 } 00:13:53.630 }' 00:13:53.630 03:13:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:53.630 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:53.630 BaseBdev2 00:13:53.630 BaseBdev3' 00:13:53.630 03:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.630 [2024-11-18 03:13:57.162983] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.630 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.891 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.891 "name": "Existed_Raid", 00:13:53.891 "uuid": "a2ba8100-1cb7-4ac2-809d-60ee9b7c98e4", 00:13:53.891 "strip_size_kb": 64, 00:13:53.891 "state": "online", 00:13:53.891 "raid_level": "raid5f", 00:13:53.891 "superblock": true, 00:13:53.891 "num_base_bdevs": 3, 00:13:53.891 "num_base_bdevs_discovered": 2, 00:13:53.891 "num_base_bdevs_operational": 2, 00:13:53.891 "base_bdevs_list": [ 00:13:53.891 { 00:13:53.891 "name": null, 00:13:53.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.891 "is_configured": false, 00:13:53.891 "data_offset": 0, 00:13:53.891 "data_size": 63488 00:13:53.891 }, 00:13:53.891 { 00:13:53.891 "name": "BaseBdev2", 00:13:53.891 "uuid": "c1ef316a-7079-405e-be1f-36aa158491e2", 00:13:53.891 "is_configured": true, 00:13:53.891 "data_offset": 2048, 00:13:53.891 "data_size": 63488 00:13:53.891 }, 00:13:53.891 { 00:13:53.891 "name": "BaseBdev3", 00:13:53.891 "uuid": "1d129b41-5132-4673-a2a7-c0e5758fd439", 00:13:53.891 "is_configured": true, 00:13:53.891 "data_offset": 2048, 00:13:53.891 "data_size": 63488 00:13:53.891 } 00:13:53.891 ] 00:13:53.891 }' 00:13:53.891 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.891 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.152 [2024-11-18 03:13:57.625703] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:54.152 [2024-11-18 03:13:57.625849] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:54.152 [2024-11-18 03:13:57.637102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.152 03:13:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.152 [2024-11-18 03:13:57.693084] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:54.152 [2024-11-18 03:13:57.693141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:54.152 
03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.152 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.413 BaseBdev2 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.413 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.413 [ 00:13:54.413 { 00:13:54.413 "name": "BaseBdev2", 00:13:54.413 "aliases": [ 00:13:54.413 "ed677dc8-2748-40e7-9e6c-3bafcf4f3b40" 00:13:54.413 ], 00:13:54.413 "product_name": "Malloc disk", 00:13:54.413 "block_size": 512, 00:13:54.413 "num_blocks": 65536, 00:13:54.413 "uuid": "ed677dc8-2748-40e7-9e6c-3bafcf4f3b40", 00:13:54.413 "assigned_rate_limits": { 00:13:54.413 "rw_ios_per_sec": 0, 00:13:54.413 "rw_mbytes_per_sec": 0, 00:13:54.413 "r_mbytes_per_sec": 0, 00:13:54.413 "w_mbytes_per_sec": 0 00:13:54.413 }, 00:13:54.413 "claimed": false, 00:13:54.413 "zoned": false, 00:13:54.413 "supported_io_types": { 00:13:54.413 "read": true, 00:13:54.413 "write": true, 00:13:54.413 "unmap": true, 00:13:54.413 "flush": true, 00:13:54.413 "reset": true, 00:13:54.413 "nvme_admin": false, 00:13:54.413 "nvme_io": false, 00:13:54.413 "nvme_io_md": false, 00:13:54.413 "write_zeroes": true, 00:13:54.413 "zcopy": true, 00:13:54.413 "get_zone_info": false, 00:13:54.413 "zone_management": false, 00:13:54.413 "zone_append": false, 00:13:54.413 "compare": false, 00:13:54.413 "compare_and_write": false, 
00:13:54.413 "abort": true, 00:13:54.413 "seek_hole": false, 00:13:54.413 "seek_data": false, 00:13:54.413 "copy": true, 00:13:54.413 "nvme_iov_md": false 00:13:54.413 }, 00:13:54.413 "memory_domains": [ 00:13:54.413 { 00:13:54.413 "dma_device_id": "system", 00:13:54.413 "dma_device_type": 1 00:13:54.413 }, 00:13:54.413 { 00:13:54.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.413 "dma_device_type": 2 00:13:54.413 } 00:13:54.413 ], 00:13:54.414 "driver_specific": {} 00:13:54.414 } 00:13:54.414 ] 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.414 BaseBdev3 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.414 [ 00:13:54.414 { 00:13:54.414 "name": "BaseBdev3", 00:13:54.414 "aliases": [ 00:13:54.414 "df4e58cc-aa5e-42fa-88f5-ec48c2f53fdd" 00:13:54.414 ], 00:13:54.414 "product_name": "Malloc disk", 00:13:54.414 "block_size": 512, 00:13:54.414 "num_blocks": 65536, 00:13:54.414 "uuid": "df4e58cc-aa5e-42fa-88f5-ec48c2f53fdd", 00:13:54.414 "assigned_rate_limits": { 00:13:54.414 "rw_ios_per_sec": 0, 00:13:54.414 "rw_mbytes_per_sec": 0, 00:13:54.414 "r_mbytes_per_sec": 0, 00:13:54.414 "w_mbytes_per_sec": 0 00:13:54.414 }, 00:13:54.414 "claimed": false, 00:13:54.414 "zoned": false, 00:13:54.414 "supported_io_types": { 00:13:54.414 "read": true, 00:13:54.414 "write": true, 00:13:54.414 "unmap": true, 00:13:54.414 "flush": true, 00:13:54.414 "reset": true, 00:13:54.414 "nvme_admin": false, 00:13:54.414 "nvme_io": false, 00:13:54.414 "nvme_io_md": false, 00:13:54.414 "write_zeroes": true, 00:13:54.414 "zcopy": true, 00:13:54.414 "get_zone_info": false, 00:13:54.414 "zone_management": false, 
00:13:54.414 "zone_append": false, 00:13:54.414 "compare": false, 00:13:54.414 "compare_and_write": false, 00:13:54.414 "abort": true, 00:13:54.414 "seek_hole": false, 00:13:54.414 "seek_data": false, 00:13:54.414 "copy": true, 00:13:54.414 "nvme_iov_md": false 00:13:54.414 }, 00:13:54.414 "memory_domains": [ 00:13:54.414 { 00:13:54.414 "dma_device_id": "system", 00:13:54.414 "dma_device_type": 1 00:13:54.414 }, 00:13:54.414 { 00:13:54.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.414 "dma_device_type": 2 00:13:54.414 } 00:13:54.414 ], 00:13:54.414 "driver_specific": {} 00:13:54.414 } 00:13:54.414 ] 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.414 [2024-11-18 03:13:57.869405] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:54.414 [2024-11-18 03:13:57.869493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:54.414 [2024-11-18 03:13:57.869534] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.414 [2024-11-18 03:13:57.871412] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.414 
03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:54.414 "name": "Existed_Raid", 00:13:54.414 "uuid": "e43035fb-c156-4a2c-8f9d-861da5716f72", 00:13:54.414 "strip_size_kb": 64, 00:13:54.414 "state": "configuring", 00:13:54.414 "raid_level": "raid5f", 00:13:54.414 "superblock": true, 00:13:54.414 "num_base_bdevs": 3, 00:13:54.414 "num_base_bdevs_discovered": 2, 00:13:54.414 "num_base_bdevs_operational": 3, 00:13:54.414 "base_bdevs_list": [ 00:13:54.414 { 00:13:54.414 "name": "BaseBdev1", 00:13:54.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.414 "is_configured": false, 00:13:54.414 "data_offset": 0, 00:13:54.414 "data_size": 0 00:13:54.414 }, 00:13:54.414 { 00:13:54.414 "name": "BaseBdev2", 00:13:54.414 "uuid": "ed677dc8-2748-40e7-9e6c-3bafcf4f3b40", 00:13:54.414 "is_configured": true, 00:13:54.414 "data_offset": 2048, 00:13:54.414 "data_size": 63488 00:13:54.414 }, 00:13:54.414 { 00:13:54.414 "name": "BaseBdev3", 00:13:54.414 "uuid": "df4e58cc-aa5e-42fa-88f5-ec48c2f53fdd", 00:13:54.414 "is_configured": true, 00:13:54.414 "data_offset": 2048, 00:13:54.414 "data_size": 63488 00:13:54.414 } 00:13:54.414 ] 00:13:54.414 }' 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.414 03:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.984 [2024-11-18 03:13:58.316632] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.984 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.984 "name": "Existed_Raid", 00:13:54.984 "uuid": "e43035fb-c156-4a2c-8f9d-861da5716f72", 00:13:54.984 "strip_size_kb": 64, 00:13:54.984 
"state": "configuring", 00:13:54.984 "raid_level": "raid5f", 00:13:54.984 "superblock": true, 00:13:54.985 "num_base_bdevs": 3, 00:13:54.985 "num_base_bdevs_discovered": 1, 00:13:54.985 "num_base_bdevs_operational": 3, 00:13:54.985 "base_bdevs_list": [ 00:13:54.985 { 00:13:54.985 "name": "BaseBdev1", 00:13:54.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.985 "is_configured": false, 00:13:54.985 "data_offset": 0, 00:13:54.985 "data_size": 0 00:13:54.985 }, 00:13:54.985 { 00:13:54.985 "name": null, 00:13:54.985 "uuid": "ed677dc8-2748-40e7-9e6c-3bafcf4f3b40", 00:13:54.985 "is_configured": false, 00:13:54.985 "data_offset": 0, 00:13:54.985 "data_size": 63488 00:13:54.985 }, 00:13:54.985 { 00:13:54.985 "name": "BaseBdev3", 00:13:54.985 "uuid": "df4e58cc-aa5e-42fa-88f5-ec48c2f53fdd", 00:13:54.985 "is_configured": true, 00:13:54.985 "data_offset": 2048, 00:13:54.985 "data_size": 63488 00:13:54.985 } 00:13:54.985 ] 00:13:54.985 }' 00:13:54.985 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.985 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev1 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.246 [2024-11-18 03:13:58.774883] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.246 BaseBdev1 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:55.246 [ 00:13:55.246 { 00:13:55.246 "name": "BaseBdev1", 00:13:55.246 "aliases": [ 00:13:55.246 "ddf39613-83d6-4461-811d-5dd9ab421d92" 00:13:55.246 ], 00:13:55.246 "product_name": "Malloc disk", 00:13:55.246 "block_size": 512, 00:13:55.246 "num_blocks": 65536, 00:13:55.246 "uuid": "ddf39613-83d6-4461-811d-5dd9ab421d92", 00:13:55.246 "assigned_rate_limits": { 00:13:55.246 "rw_ios_per_sec": 0, 00:13:55.246 "rw_mbytes_per_sec": 0, 00:13:55.246 "r_mbytes_per_sec": 0, 00:13:55.246 "w_mbytes_per_sec": 0 00:13:55.246 }, 00:13:55.246 "claimed": true, 00:13:55.246 "claim_type": "exclusive_write", 00:13:55.246 "zoned": false, 00:13:55.246 "supported_io_types": { 00:13:55.246 "read": true, 00:13:55.246 "write": true, 00:13:55.246 "unmap": true, 00:13:55.246 "flush": true, 00:13:55.246 "reset": true, 00:13:55.246 "nvme_admin": false, 00:13:55.246 "nvme_io": false, 00:13:55.246 "nvme_io_md": false, 00:13:55.246 "write_zeroes": true, 00:13:55.246 "zcopy": true, 00:13:55.246 "get_zone_info": false, 00:13:55.246 "zone_management": false, 00:13:55.246 "zone_append": false, 00:13:55.246 "compare": false, 00:13:55.246 "compare_and_write": false, 00:13:55.246 "abort": true, 00:13:55.246 "seek_hole": false, 00:13:55.246 "seek_data": false, 00:13:55.246 "copy": true, 00:13:55.246 "nvme_iov_md": false 00:13:55.246 }, 00:13:55.246 "memory_domains": [ 00:13:55.246 { 00:13:55.246 "dma_device_id": "system", 00:13:55.246 "dma_device_type": 1 00:13:55.246 }, 00:13:55.246 { 00:13:55.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.246 "dma_device_type": 2 00:13:55.246 } 00:13:55.246 ], 00:13:55.246 "driver_specific": {} 00:13:55.246 } 00:13:55.246 ] 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.246 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.507 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.507 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.507 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.507 "name": "Existed_Raid", 00:13:55.507 "uuid": "e43035fb-c156-4a2c-8f9d-861da5716f72", 00:13:55.507 "strip_size_kb": 64, 00:13:55.507 
"state": "configuring", 00:13:55.507 "raid_level": "raid5f", 00:13:55.507 "superblock": true, 00:13:55.507 "num_base_bdevs": 3, 00:13:55.507 "num_base_bdevs_discovered": 2, 00:13:55.507 "num_base_bdevs_operational": 3, 00:13:55.507 "base_bdevs_list": [ 00:13:55.507 { 00:13:55.507 "name": "BaseBdev1", 00:13:55.507 "uuid": "ddf39613-83d6-4461-811d-5dd9ab421d92", 00:13:55.507 "is_configured": true, 00:13:55.507 "data_offset": 2048, 00:13:55.507 "data_size": 63488 00:13:55.507 }, 00:13:55.507 { 00:13:55.507 "name": null, 00:13:55.507 "uuid": "ed677dc8-2748-40e7-9e6c-3bafcf4f3b40", 00:13:55.507 "is_configured": false, 00:13:55.507 "data_offset": 0, 00:13:55.507 "data_size": 63488 00:13:55.507 }, 00:13:55.507 { 00:13:55.507 "name": "BaseBdev3", 00:13:55.507 "uuid": "df4e58cc-aa5e-42fa-88f5-ec48c2f53fdd", 00:13:55.507 "is_configured": true, 00:13:55.507 "data_offset": 2048, 00:13:55.507 "data_size": 63488 00:13:55.507 } 00:13:55.507 ] 00:13:55.507 }' 00:13:55.507 03:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.507 03:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.767 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:55.767 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.767 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.767 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.767 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.767 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:55.767 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev3 00:13:55.767 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.767 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.767 [2024-11-18 03:13:59.314079] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:55.767 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.767 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:55.767 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.767 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.767 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.768 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.768 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.768 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.768 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.768 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.768 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.768 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.768 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.768 03:13:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.768 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.027 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.027 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.027 "name": "Existed_Raid", 00:13:56.027 "uuid": "e43035fb-c156-4a2c-8f9d-861da5716f72", 00:13:56.027 "strip_size_kb": 64, 00:13:56.027 "state": "configuring", 00:13:56.027 "raid_level": "raid5f", 00:13:56.027 "superblock": true, 00:13:56.027 "num_base_bdevs": 3, 00:13:56.027 "num_base_bdevs_discovered": 1, 00:13:56.027 "num_base_bdevs_operational": 3, 00:13:56.027 "base_bdevs_list": [ 00:13:56.027 { 00:13:56.027 "name": "BaseBdev1", 00:13:56.027 "uuid": "ddf39613-83d6-4461-811d-5dd9ab421d92", 00:13:56.027 "is_configured": true, 00:13:56.027 "data_offset": 2048, 00:13:56.027 "data_size": 63488 00:13:56.027 }, 00:13:56.027 { 00:13:56.027 "name": null, 00:13:56.028 "uuid": "ed677dc8-2748-40e7-9e6c-3bafcf4f3b40", 00:13:56.028 "is_configured": false, 00:13:56.028 "data_offset": 0, 00:13:56.028 "data_size": 63488 00:13:56.028 }, 00:13:56.028 { 00:13:56.028 "name": null, 00:13:56.028 "uuid": "df4e58cc-aa5e-42fa-88f5-ec48c2f53fdd", 00:13:56.028 "is_configured": false, 00:13:56.028 "data_offset": 0, 00:13:56.028 "data_size": 63488 00:13:56.028 } 00:13:56.028 ] 00:13:56.028 }' 00:13:56.028 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.028 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.288 [2024-11-18 03:13:59.809254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.288 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.548 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.548 "name": "Existed_Raid", 00:13:56.548 "uuid": "e43035fb-c156-4a2c-8f9d-861da5716f72", 00:13:56.548 "strip_size_kb": 64, 00:13:56.548 "state": "configuring", 00:13:56.548 "raid_level": "raid5f", 00:13:56.548 "superblock": true, 00:13:56.548 "num_base_bdevs": 3, 00:13:56.548 "num_base_bdevs_discovered": 2, 00:13:56.548 "num_base_bdevs_operational": 3, 00:13:56.548 "base_bdevs_list": [ 00:13:56.548 { 00:13:56.548 "name": "BaseBdev1", 00:13:56.548 "uuid": "ddf39613-83d6-4461-811d-5dd9ab421d92", 00:13:56.548 "is_configured": true, 00:13:56.548 "data_offset": 2048, 00:13:56.548 "data_size": 63488 00:13:56.548 }, 00:13:56.548 { 00:13:56.548 "name": null, 00:13:56.548 "uuid": "ed677dc8-2748-40e7-9e6c-3bafcf4f3b40", 00:13:56.548 "is_configured": false, 00:13:56.548 "data_offset": 0, 00:13:56.548 "data_size": 63488 00:13:56.548 }, 00:13:56.548 { 00:13:56.548 "name": "BaseBdev3", 00:13:56.548 "uuid": "df4e58cc-aa5e-42fa-88f5-ec48c2f53fdd", 00:13:56.548 "is_configured": true, 00:13:56.548 "data_offset": 
2048, 00:13:56.548 "data_size": 63488 00:13:56.548 } 00:13:56.548 ] 00:13:56.548 }' 00:13:56.548 03:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.548 03:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.808 [2024-11-18 03:14:00.316416] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.808 03:14:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.808 "name": "Existed_Raid", 00:13:56.808 "uuid": "e43035fb-c156-4a2c-8f9d-861da5716f72", 00:13:56.808 "strip_size_kb": 64, 00:13:56.808 "state": "configuring", 00:13:56.808 "raid_level": "raid5f", 00:13:56.808 "superblock": true, 00:13:56.808 "num_base_bdevs": 3, 00:13:56.808 "num_base_bdevs_discovered": 1, 00:13:56.808 "num_base_bdevs_operational": 3, 00:13:56.808 "base_bdevs_list": [ 00:13:56.808 { 00:13:56.808 "name": null, 00:13:56.808 "uuid": "ddf39613-83d6-4461-811d-5dd9ab421d92", 
00:13:56.808 "is_configured": false, 00:13:56.808 "data_offset": 0, 00:13:56.808 "data_size": 63488 00:13:56.808 }, 00:13:56.808 { 00:13:56.808 "name": null, 00:13:56.808 "uuid": "ed677dc8-2748-40e7-9e6c-3bafcf4f3b40", 00:13:56.808 "is_configured": false, 00:13:56.808 "data_offset": 0, 00:13:56.808 "data_size": 63488 00:13:56.808 }, 00:13:56.808 { 00:13:56.808 "name": "BaseBdev3", 00:13:56.808 "uuid": "df4e58cc-aa5e-42fa-88f5-ec48c2f53fdd", 00:13:56.808 "is_configured": true, 00:13:56.808 "data_offset": 2048, 00:13:56.808 "data_size": 63488 00:13:56.808 } 00:13:56.808 ] 00:13:56.808 }' 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.808 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.379 [2024-11-18 03:14:00.814160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.379 "name": "Existed_Raid", 00:13:57.379 "uuid": "e43035fb-c156-4a2c-8f9d-861da5716f72", 00:13:57.379 "strip_size_kb": 64, 00:13:57.379 "state": "configuring", 00:13:57.379 "raid_level": "raid5f", 00:13:57.379 "superblock": true, 00:13:57.379 "num_base_bdevs": 3, 00:13:57.379 "num_base_bdevs_discovered": 2, 00:13:57.379 "num_base_bdevs_operational": 3, 00:13:57.379 "base_bdevs_list": [ 00:13:57.379 { 00:13:57.379 "name": null, 00:13:57.379 "uuid": "ddf39613-83d6-4461-811d-5dd9ab421d92", 00:13:57.379 "is_configured": false, 00:13:57.379 "data_offset": 0, 00:13:57.379 "data_size": 63488 00:13:57.379 }, 00:13:57.379 { 00:13:57.379 "name": "BaseBdev2", 00:13:57.379 "uuid": "ed677dc8-2748-40e7-9e6c-3bafcf4f3b40", 00:13:57.379 "is_configured": true, 00:13:57.379 "data_offset": 2048, 00:13:57.379 "data_size": 63488 00:13:57.379 }, 00:13:57.379 { 00:13:57.379 "name": "BaseBdev3", 00:13:57.379 "uuid": "df4e58cc-aa5e-42fa-88f5-ec48c2f53fdd", 00:13:57.379 "is_configured": true, 00:13:57.379 "data_offset": 2048, 00:13:57.379 "data_size": 63488 00:13:57.379 } 00:13:57.379 ] 00:13:57.379 }' 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.379 03:14:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.950 03:14:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ddf39613-83d6-4461-811d-5dd9ab421d92 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.950 [2024-11-18 03:14:01.344290] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:57.950 [2024-11-18 03:14:01.344465] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:57.950 [2024-11-18 03:14:01.344480] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:57.950 [2024-11-18 03:14:01.344716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:57.950 [2024-11-18 03:14:01.345179] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:57.950 [2024-11-18 03:14:01.345199] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:57.950 [2024-11-18 03:14:01.345308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:57.950 NewBaseBdev 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.950 [ 00:13:57.950 { 00:13:57.950 "name": "NewBaseBdev", 00:13:57.950 "aliases": [ 00:13:57.950 "ddf39613-83d6-4461-811d-5dd9ab421d92" 00:13:57.950 ], 00:13:57.950 "product_name": "Malloc disk", 00:13:57.950 "block_size": 512, 00:13:57.950 "num_blocks": 65536, 00:13:57.950 "uuid": 
"ddf39613-83d6-4461-811d-5dd9ab421d92", 00:13:57.950 "assigned_rate_limits": { 00:13:57.950 "rw_ios_per_sec": 0, 00:13:57.950 "rw_mbytes_per_sec": 0, 00:13:57.950 "r_mbytes_per_sec": 0, 00:13:57.950 "w_mbytes_per_sec": 0 00:13:57.950 }, 00:13:57.950 "claimed": true, 00:13:57.950 "claim_type": "exclusive_write", 00:13:57.950 "zoned": false, 00:13:57.950 "supported_io_types": { 00:13:57.950 "read": true, 00:13:57.950 "write": true, 00:13:57.950 "unmap": true, 00:13:57.950 "flush": true, 00:13:57.950 "reset": true, 00:13:57.950 "nvme_admin": false, 00:13:57.950 "nvme_io": false, 00:13:57.950 "nvme_io_md": false, 00:13:57.950 "write_zeroes": true, 00:13:57.950 "zcopy": true, 00:13:57.950 "get_zone_info": false, 00:13:57.950 "zone_management": false, 00:13:57.950 "zone_append": false, 00:13:57.950 "compare": false, 00:13:57.950 "compare_and_write": false, 00:13:57.950 "abort": true, 00:13:57.950 "seek_hole": false, 00:13:57.950 "seek_data": false, 00:13:57.950 "copy": true, 00:13:57.950 "nvme_iov_md": false 00:13:57.950 }, 00:13:57.950 "memory_domains": [ 00:13:57.950 { 00:13:57.950 "dma_device_id": "system", 00:13:57.950 "dma_device_type": 1 00:13:57.950 }, 00:13:57.950 { 00:13:57.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.950 "dma_device_type": 2 00:13:57.950 } 00:13:57.950 ], 00:13:57.950 "driver_specific": {} 00:13:57.950 } 00:13:57.950 ] 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.950 
03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.950 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.951 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.951 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.951 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.951 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.951 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.951 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.951 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.951 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.951 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.951 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.951 "name": "Existed_Raid", 00:13:57.951 "uuid": "e43035fb-c156-4a2c-8f9d-861da5716f72", 00:13:57.951 "strip_size_kb": 64, 00:13:57.951 "state": "online", 00:13:57.951 "raid_level": "raid5f", 00:13:57.951 "superblock": true, 00:13:57.951 "num_base_bdevs": 3, 00:13:57.951 "num_base_bdevs_discovered": 3, 00:13:57.951 "num_base_bdevs_operational": 3, 00:13:57.951 "base_bdevs_list": [ 00:13:57.951 { 00:13:57.951 "name": "NewBaseBdev", 00:13:57.951 "uuid": 
"ddf39613-83d6-4461-811d-5dd9ab421d92", 00:13:57.951 "is_configured": true, 00:13:57.951 "data_offset": 2048, 00:13:57.951 "data_size": 63488 00:13:57.951 }, 00:13:57.951 { 00:13:57.951 "name": "BaseBdev2", 00:13:57.951 "uuid": "ed677dc8-2748-40e7-9e6c-3bafcf4f3b40", 00:13:57.951 "is_configured": true, 00:13:57.951 "data_offset": 2048, 00:13:57.951 "data_size": 63488 00:13:57.951 }, 00:13:57.951 { 00:13:57.951 "name": "BaseBdev3", 00:13:57.951 "uuid": "df4e58cc-aa5e-42fa-88f5-ec48c2f53fdd", 00:13:57.951 "is_configured": true, 00:13:57.951 "data_offset": 2048, 00:13:57.951 "data_size": 63488 00:13:57.951 } 00:13:57.951 ] 00:13:57.951 }' 00:13:57.951 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.951 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:58.521 [2024-11-18 03:14:01.839694] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:58.521 "name": "Existed_Raid", 00:13:58.521 "aliases": [ 00:13:58.521 "e43035fb-c156-4a2c-8f9d-861da5716f72" 00:13:58.521 ], 00:13:58.521 "product_name": "Raid Volume", 00:13:58.521 "block_size": 512, 00:13:58.521 "num_blocks": 126976, 00:13:58.521 "uuid": "e43035fb-c156-4a2c-8f9d-861da5716f72", 00:13:58.521 "assigned_rate_limits": { 00:13:58.521 "rw_ios_per_sec": 0, 00:13:58.521 "rw_mbytes_per_sec": 0, 00:13:58.521 "r_mbytes_per_sec": 0, 00:13:58.521 "w_mbytes_per_sec": 0 00:13:58.521 }, 00:13:58.521 "claimed": false, 00:13:58.521 "zoned": false, 00:13:58.521 "supported_io_types": { 00:13:58.521 "read": true, 00:13:58.521 "write": true, 00:13:58.521 "unmap": false, 00:13:58.521 "flush": false, 00:13:58.521 "reset": true, 00:13:58.521 "nvme_admin": false, 00:13:58.521 "nvme_io": false, 00:13:58.521 "nvme_io_md": false, 00:13:58.521 "write_zeroes": true, 00:13:58.521 "zcopy": false, 00:13:58.521 "get_zone_info": false, 00:13:58.521 "zone_management": false, 00:13:58.521 "zone_append": false, 00:13:58.521 "compare": false, 00:13:58.521 "compare_and_write": false, 00:13:58.521 "abort": false, 00:13:58.521 "seek_hole": false, 00:13:58.521 "seek_data": false, 00:13:58.521 "copy": false, 00:13:58.521 "nvme_iov_md": false 00:13:58.521 }, 00:13:58.521 "driver_specific": { 00:13:58.521 "raid": { 00:13:58.521 "uuid": "e43035fb-c156-4a2c-8f9d-861da5716f72", 00:13:58.521 "strip_size_kb": 64, 00:13:58.521 "state": "online", 00:13:58.521 "raid_level": "raid5f", 00:13:58.521 "superblock": true, 00:13:58.521 "num_base_bdevs": 3, 00:13:58.521 "num_base_bdevs_discovered": 3, 00:13:58.521 
"num_base_bdevs_operational": 3, 00:13:58.521 "base_bdevs_list": [ 00:13:58.521 { 00:13:58.521 "name": "NewBaseBdev", 00:13:58.521 "uuid": "ddf39613-83d6-4461-811d-5dd9ab421d92", 00:13:58.521 "is_configured": true, 00:13:58.521 "data_offset": 2048, 00:13:58.521 "data_size": 63488 00:13:58.521 }, 00:13:58.521 { 00:13:58.521 "name": "BaseBdev2", 00:13:58.521 "uuid": "ed677dc8-2748-40e7-9e6c-3bafcf4f3b40", 00:13:58.521 "is_configured": true, 00:13:58.521 "data_offset": 2048, 00:13:58.521 "data_size": 63488 00:13:58.521 }, 00:13:58.521 { 00:13:58.521 "name": "BaseBdev3", 00:13:58.521 "uuid": "df4e58cc-aa5e-42fa-88f5-ec48c2f53fdd", 00:13:58.521 "is_configured": true, 00:13:58.521 "data_offset": 2048, 00:13:58.521 "data_size": 63488 00:13:58.521 } 00:13:58.521 ] 00:13:58.521 } 00:13:58.521 } 00:13:58.521 }' 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:58.521 BaseBdev2 00:13:58.521 BaseBdev3' 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.521 03:14:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.521 03:14:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.521 03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.521 03:14:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.521 03:14:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.521 03:14:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.521 03:14:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:58.521 03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.521 03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.521 03:14:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.521 03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.521 03:14:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.521 03:14:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.521 03:14:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:58.521 03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.521 03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.521 [2024-11-18 03:14:02.087095] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:58.521 [2024-11-18 03:14:02.087124] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:58.521 [2024-11-18 03:14:02.087225] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.521 [2024-11-18 03:14:02.087472] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:58.521 [2024-11-18 03:14:02.087485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:58.521 03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.521 03:14:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 91192 00:13:58.521 03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 91192 ']' 00:13:58.522 03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 91192 00:13:58.781 03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:58.781 
03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:58.781 03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91192 00:13:58.781 03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:58.781 killing process with pid 91192 00:13:58.781 03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:58.781 03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91192' 00:13:58.781 03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 91192 00:13:58.781 [2024-11-18 03:14:02.133548] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.781 03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 91192 00:13:58.781 [2024-11-18 03:14:02.165213] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:59.041 03:14:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:59.041 00:13:59.041 real 0m8.833s 00:13:59.041 user 0m15.109s 00:13:59.041 sys 0m1.802s 00:13:59.041 03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.041 03:14:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.041 ************************************ 00:13:59.041 END TEST raid5f_state_function_test_sb 00:13:59.041 ************************************ 00:13:59.041 03:14:02 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:13:59.041 03:14:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:59.041 03:14:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.041 03:14:02 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:13:59.041 ************************************ 00:13:59.041 START TEST raid5f_superblock_test 00:13:59.041 ************************************ 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 
-- # strip_size_create_arg='-z 64' 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91790 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91790 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 91790 ']' 00:13:59.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:59.041 03:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.041 [2024-11-18 03:14:02.567416] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:59.041 [2024-11-18 03:14:02.567627] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91790 ] 00:13:59.301 [2024-11-18 03:14:02.729476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.301 [2024-11-18 03:14:02.779796] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.301 [2024-11-18 03:14:02.821817] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.301 [2024-11-18 03:14:02.821855] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.872 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:59.872 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:59.872 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:59.872 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:59.872 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:59.872 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:59.872 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:59.872 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:59.872 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:59.872 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:59.872 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:13:59.872 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.872 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.132 malloc1 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.132 [2024-11-18 03:14:03.456016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:00.132 [2024-11-18 03:14:03.456136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.132 [2024-11-18 03:14:03.456190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:00.132 [2024-11-18 03:14:03.456250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.132 [2024-11-18 03:14:03.458432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.132 [2024-11-18 03:14:03.458506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:00.132 pt1 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.132 malloc2 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.132 [2024-11-18 03:14:03.499429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:00.132 [2024-11-18 03:14:03.499550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.132 [2024-11-18 03:14:03.499592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:00.132 [2024-11-18 03:14:03.499635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.132 [2024-11-18 03:14:03.502246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.132 [2024-11-18 03:14:03.502317] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:00.132 pt2 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.132 malloc3 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.132 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.133 [2024-11-18 03:14:03.532312] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:00.133 [2024-11-18 03:14:03.532422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.133 [2024-11-18 03:14:03.532467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:00.133 [2024-11-18 03:14:03.532499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.133 [2024-11-18 03:14:03.534871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.133 [2024-11-18 03:14:03.534970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:00.133 pt3 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.133 [2024-11-18 03:14:03.544340] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:00.133 [2024-11-18 03:14:03.546276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:00.133 [2024-11-18 03:14:03.546385] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:00.133 [2024-11-18 03:14:03.546586] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:00.133 [2024-11-18 03:14:03.546636] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:14:00.133 [2024-11-18 03:14:03.546924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:14:00.133 [2024-11-18 03:14:03.547404] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:00.133 [2024-11-18 03:14:03.547457] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:00.133 [2024-11-18 03:14:03.547618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.133 
03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.133 "name": "raid_bdev1", 00:14:00.133 "uuid": "9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede", 00:14:00.133 "strip_size_kb": 64, 00:14:00.133 "state": "online", 00:14:00.133 "raid_level": "raid5f", 00:14:00.133 "superblock": true, 00:14:00.133 "num_base_bdevs": 3, 00:14:00.133 "num_base_bdevs_discovered": 3, 00:14:00.133 "num_base_bdevs_operational": 3, 00:14:00.133 "base_bdevs_list": [ 00:14:00.133 { 00:14:00.133 "name": "pt1", 00:14:00.133 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:00.133 "is_configured": true, 00:14:00.133 "data_offset": 2048, 00:14:00.133 "data_size": 63488 00:14:00.133 }, 00:14:00.133 { 00:14:00.133 "name": "pt2", 00:14:00.133 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.133 "is_configured": true, 00:14:00.133 "data_offset": 2048, 00:14:00.133 "data_size": 63488 00:14:00.133 }, 00:14:00.133 { 00:14:00.133 "name": "pt3", 00:14:00.133 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:00.133 "is_configured": true, 00:14:00.133 "data_offset": 2048, 00:14:00.133 "data_size": 63488 00:14:00.133 } 00:14:00.133 ] 00:14:00.133 }' 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.133 03:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.704 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:00.704 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:00.704 03:14:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:00.704 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:00.704 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:00.704 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:00.704 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:00.704 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:00.704 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.704 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.704 [2024-11-18 03:14:04.020388] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:00.704 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.704 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:00.704 "name": "raid_bdev1", 00:14:00.704 "aliases": [ 00:14:00.704 "9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede" 00:14:00.704 ], 00:14:00.704 "product_name": "Raid Volume", 00:14:00.704 "block_size": 512, 00:14:00.704 "num_blocks": 126976, 00:14:00.704 "uuid": "9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede", 00:14:00.704 "assigned_rate_limits": { 00:14:00.704 "rw_ios_per_sec": 0, 00:14:00.704 "rw_mbytes_per_sec": 0, 00:14:00.704 "r_mbytes_per_sec": 0, 00:14:00.704 "w_mbytes_per_sec": 0 00:14:00.704 }, 00:14:00.704 "claimed": false, 00:14:00.704 "zoned": false, 00:14:00.704 "supported_io_types": { 00:14:00.704 "read": true, 00:14:00.704 "write": true, 00:14:00.704 "unmap": false, 00:14:00.704 "flush": false, 00:14:00.704 "reset": true, 00:14:00.704 "nvme_admin": false, 00:14:00.704 "nvme_io": false, 00:14:00.704 "nvme_io_md": false, 
00:14:00.704 "write_zeroes": true, 00:14:00.704 "zcopy": false, 00:14:00.704 "get_zone_info": false, 00:14:00.704 "zone_management": false, 00:14:00.704 "zone_append": false, 00:14:00.704 "compare": false, 00:14:00.704 "compare_and_write": false, 00:14:00.704 "abort": false, 00:14:00.704 "seek_hole": false, 00:14:00.704 "seek_data": false, 00:14:00.704 "copy": false, 00:14:00.704 "nvme_iov_md": false 00:14:00.704 }, 00:14:00.704 "driver_specific": { 00:14:00.704 "raid": { 00:14:00.704 "uuid": "9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede", 00:14:00.704 "strip_size_kb": 64, 00:14:00.704 "state": "online", 00:14:00.704 "raid_level": "raid5f", 00:14:00.704 "superblock": true, 00:14:00.704 "num_base_bdevs": 3, 00:14:00.704 "num_base_bdevs_discovered": 3, 00:14:00.704 "num_base_bdevs_operational": 3, 00:14:00.704 "base_bdevs_list": [ 00:14:00.704 { 00:14:00.704 "name": "pt1", 00:14:00.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:00.704 "is_configured": true, 00:14:00.704 "data_offset": 2048, 00:14:00.704 "data_size": 63488 00:14:00.704 }, 00:14:00.704 { 00:14:00.704 "name": "pt2", 00:14:00.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.704 "is_configured": true, 00:14:00.704 "data_offset": 2048, 00:14:00.704 "data_size": 63488 00:14:00.704 }, 00:14:00.704 { 00:14:00.704 "name": "pt3", 00:14:00.704 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:00.704 "is_configured": true, 00:14:00.704 "data_offset": 2048, 00:14:00.704 "data_size": 63488 00:14:00.704 } 00:14:00.704 ] 00:14:00.704 } 00:14:00.704 } 00:14:00.704 }' 00:14:00.704 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:00.704 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:00.704 pt2 00:14:00.704 pt3' 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.705 
03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.705 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.966 [2024-11-18 03:14:04.291869] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede ']' 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:00.966 03:14:04 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.966 [2024-11-18 03:14:04.339587] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:00.966 [2024-11-18 03:14:04.339617] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.966 [2024-11-18 03:14:04.339694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.966 [2024-11-18 03:14:04.339765] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.966 [2024-11-18 03:14:04.339777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.966 [2024-11-18 03:14:04.495355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:00.966 [2024-11-18 03:14:04.497270] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:00.966 [2024-11-18 03:14:04.497322] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:00.966 [2024-11-18 03:14:04.497369] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:00.966 [2024-11-18 03:14:04.497415] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:00.966 [2024-11-18 03:14:04.497434] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:00.966 [2024-11-18 03:14:04.497447] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:00.966 [2024-11-18 03:14:04.497466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:14:00.966 request: 00:14:00.966 { 00:14:00.966 "name": "raid_bdev1", 00:14:00.966 "raid_level": "raid5f", 00:14:00.966 "base_bdevs": [ 00:14:00.966 "malloc1", 00:14:00.966 "malloc2", 00:14:00.966 "malloc3" 00:14:00.966 ], 00:14:00.966 "strip_size_kb": 64, 00:14:00.966 "superblock": false, 00:14:00.966 "method": "bdev_raid_create", 00:14:00.966 "req_id": 1 00:14:00.966 } 00:14:00.966 Got JSON-RPC error response 00:14:00.966 response: 00:14:00.966 { 00:14:00.966 "code": -17, 00:14:00.966 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:00.966 } 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.966 
03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.966 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.227 [2024-11-18 03:14:04.547233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:01.227 [2024-11-18 03:14:04.547290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.227 [2024-11-18 03:14:04.547307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:01.227 [2024-11-18 03:14:04.547317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.227 [2024-11-18 03:14:04.549491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.227 [2024-11-18 03:14:04.549579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:01.227 [2024-11-18 03:14:04.549661] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:01.227 [2024-11-18 03:14:04.549722] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:01.227 pt1 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.227 "name": "raid_bdev1", 00:14:01.227 "uuid": "9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede", 00:14:01.227 "strip_size_kb": 64, 00:14:01.227 "state": "configuring", 00:14:01.227 "raid_level": "raid5f", 00:14:01.227 "superblock": true, 00:14:01.227 "num_base_bdevs": 3, 00:14:01.227 "num_base_bdevs_discovered": 1, 00:14:01.227 
"num_base_bdevs_operational": 3, 00:14:01.227 "base_bdevs_list": [ 00:14:01.227 { 00:14:01.227 "name": "pt1", 00:14:01.227 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:01.227 "is_configured": true, 00:14:01.227 "data_offset": 2048, 00:14:01.227 "data_size": 63488 00:14:01.227 }, 00:14:01.227 { 00:14:01.227 "name": null, 00:14:01.227 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.227 "is_configured": false, 00:14:01.227 "data_offset": 2048, 00:14:01.227 "data_size": 63488 00:14:01.227 }, 00:14:01.227 { 00:14:01.227 "name": null, 00:14:01.227 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:01.227 "is_configured": false, 00:14:01.227 "data_offset": 2048, 00:14:01.227 "data_size": 63488 00:14:01.227 } 00:14:01.227 ] 00:14:01.227 }' 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.227 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.488 [2024-11-18 03:14:04.918657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:01.488 [2024-11-18 03:14:04.918788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.488 [2024-11-18 03:14:04.918833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:01.488 [2024-11-18 03:14:04.918867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.488 [2024-11-18 03:14:04.919293] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.488 [2024-11-18 03:14:04.919362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:01.488 [2024-11-18 03:14:04.919469] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:01.488 [2024-11-18 03:14:04.919521] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:01.488 pt2 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.488 [2024-11-18 03:14:04.926638] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.488 "name": "raid_bdev1", 00:14:01.488 "uuid": "9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede", 00:14:01.488 "strip_size_kb": 64, 00:14:01.488 "state": "configuring", 00:14:01.488 "raid_level": "raid5f", 00:14:01.488 "superblock": true, 00:14:01.488 "num_base_bdevs": 3, 00:14:01.488 "num_base_bdevs_discovered": 1, 00:14:01.488 "num_base_bdevs_operational": 3, 00:14:01.488 "base_bdevs_list": [ 00:14:01.488 { 00:14:01.488 "name": "pt1", 00:14:01.488 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:01.488 "is_configured": true, 00:14:01.488 "data_offset": 2048, 00:14:01.488 "data_size": 63488 00:14:01.488 }, 00:14:01.488 { 00:14:01.488 "name": null, 00:14:01.488 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.488 "is_configured": false, 00:14:01.488 "data_offset": 0, 00:14:01.488 "data_size": 63488 00:14:01.488 }, 00:14:01.488 { 00:14:01.488 "name": null, 00:14:01.488 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:01.488 "is_configured": false, 00:14:01.488 "data_offset": 2048, 00:14:01.488 "data_size": 63488 00:14:01.488 } 00:14:01.488 ] 00:14:01.488 }' 00:14:01.488 03:14:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.488 03:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.059 [2024-11-18 03:14:05.389824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:02.059 [2024-11-18 03:14:05.389975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.059 [2024-11-18 03:14:05.390000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:02.059 [2024-11-18 03:14:05.390009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.059 [2024-11-18 03:14:05.390404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.059 [2024-11-18 03:14:05.390428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:02.059 [2024-11-18 03:14:05.390506] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:02.059 [2024-11-18 03:14:05.390527] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:02.059 pt2 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:02.059 03:14:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.059 [2024-11-18 03:14:05.397782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:02.059 [2024-11-18 03:14:05.397828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.059 [2024-11-18 03:14:05.397846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:02.059 [2024-11-18 03:14:05.397854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.059 [2024-11-18 03:14:05.398204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.059 [2024-11-18 03:14:05.398232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:02.059 [2024-11-18 03:14:05.398293] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:02.059 [2024-11-18 03:14:05.398310] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:02.059 [2024-11-18 03:14:05.398405] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:02.059 [2024-11-18 03:14:05.398417] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:02.059 [2024-11-18 03:14:05.398632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:02.059 [2024-11-18 03:14:05.399069] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:02.059 [2024-11-18 03:14:05.399085] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:14:02.059 [2024-11-18 03:14:05.399183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.059 pt3 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.059 "name": "raid_bdev1", 00:14:02.059 "uuid": "9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede", 00:14:02.059 "strip_size_kb": 64, 00:14:02.059 "state": "online", 00:14:02.059 "raid_level": "raid5f", 00:14:02.059 "superblock": true, 00:14:02.059 "num_base_bdevs": 3, 00:14:02.059 "num_base_bdevs_discovered": 3, 00:14:02.059 "num_base_bdevs_operational": 3, 00:14:02.059 "base_bdevs_list": [ 00:14:02.059 { 00:14:02.059 "name": "pt1", 00:14:02.059 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:02.059 "is_configured": true, 00:14:02.059 "data_offset": 2048, 00:14:02.059 "data_size": 63488 00:14:02.059 }, 00:14:02.059 { 00:14:02.059 "name": "pt2", 00:14:02.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.059 "is_configured": true, 00:14:02.059 "data_offset": 2048, 00:14:02.059 "data_size": 63488 00:14:02.059 }, 00:14:02.059 { 00:14:02.059 "name": "pt3", 00:14:02.059 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:02.059 "is_configured": true, 00:14:02.059 "data_offset": 2048, 00:14:02.059 "data_size": 63488 00:14:02.059 } 00:14:02.059 ] 00:14:02.059 }' 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.059 03:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.320 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:02.320 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:02.320 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:02.320 
03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:02.320 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:02.320 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:02.320 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:02.320 03:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.320 03:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.320 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:02.320 [2024-11-18 03:14:05.841247] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.320 03:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.320 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:02.320 "name": "raid_bdev1", 00:14:02.320 "aliases": [ 00:14:02.320 "9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede" 00:14:02.320 ], 00:14:02.320 "product_name": "Raid Volume", 00:14:02.320 "block_size": 512, 00:14:02.320 "num_blocks": 126976, 00:14:02.320 "uuid": "9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede", 00:14:02.320 "assigned_rate_limits": { 00:14:02.320 "rw_ios_per_sec": 0, 00:14:02.320 "rw_mbytes_per_sec": 0, 00:14:02.320 "r_mbytes_per_sec": 0, 00:14:02.320 "w_mbytes_per_sec": 0 00:14:02.320 }, 00:14:02.320 "claimed": false, 00:14:02.320 "zoned": false, 00:14:02.320 "supported_io_types": { 00:14:02.320 "read": true, 00:14:02.320 "write": true, 00:14:02.320 "unmap": false, 00:14:02.320 "flush": false, 00:14:02.320 "reset": true, 00:14:02.320 "nvme_admin": false, 00:14:02.320 "nvme_io": false, 00:14:02.320 "nvme_io_md": false, 00:14:02.320 "write_zeroes": true, 00:14:02.320 "zcopy": false, 00:14:02.320 "get_zone_info": false, 
00:14:02.320 "zone_management": false, 00:14:02.320 "zone_append": false, 00:14:02.320 "compare": false, 00:14:02.320 "compare_and_write": false, 00:14:02.320 "abort": false, 00:14:02.320 "seek_hole": false, 00:14:02.320 "seek_data": false, 00:14:02.320 "copy": false, 00:14:02.320 "nvme_iov_md": false 00:14:02.320 }, 00:14:02.320 "driver_specific": { 00:14:02.320 "raid": { 00:14:02.320 "uuid": "9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede", 00:14:02.320 "strip_size_kb": 64, 00:14:02.320 "state": "online", 00:14:02.320 "raid_level": "raid5f", 00:14:02.320 "superblock": true, 00:14:02.320 "num_base_bdevs": 3, 00:14:02.320 "num_base_bdevs_discovered": 3, 00:14:02.320 "num_base_bdevs_operational": 3, 00:14:02.320 "base_bdevs_list": [ 00:14:02.320 { 00:14:02.320 "name": "pt1", 00:14:02.320 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:02.320 "is_configured": true, 00:14:02.320 "data_offset": 2048, 00:14:02.320 "data_size": 63488 00:14:02.320 }, 00:14:02.320 { 00:14:02.320 "name": "pt2", 00:14:02.320 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.320 "is_configured": true, 00:14:02.320 "data_offset": 2048, 00:14:02.320 "data_size": 63488 00:14:02.320 }, 00:14:02.320 { 00:14:02.320 "name": "pt3", 00:14:02.320 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:02.320 "is_configured": true, 00:14:02.320 "data_offset": 2048, 00:14:02.320 "data_size": 63488 00:14:02.320 } 00:14:02.320 ] 00:14:02.320 } 00:14:02.320 } 00:14:02.320 }' 00:14:02.320 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:02.581 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:02.581 pt2 00:14:02.581 pt3' 00:14:02.581 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.581 03:14:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:02.581 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.581 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:02.581 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.581 03:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.581 03:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.581 03:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.581 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.581 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.581 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.581 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:02.581 03:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.581 03:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.581 03:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.581 [2024-11-18 03:14:06.104727] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede '!=' 9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede ']' 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:02.581 03:14:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.581 [2024-11-18 03:14:06.148515] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.581 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:02.842 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.842 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.842 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.842 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.842 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.842 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.842 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:02.842 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.842 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.842 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.842 "name": "raid_bdev1", 00:14:02.842 "uuid": "9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede", 00:14:02.842 "strip_size_kb": 64, 00:14:02.842 "state": "online", 00:14:02.842 "raid_level": "raid5f", 00:14:02.842 "superblock": true, 00:14:02.842 "num_base_bdevs": 3, 00:14:02.842 "num_base_bdevs_discovered": 2, 00:14:02.842 "num_base_bdevs_operational": 2, 00:14:02.842 "base_bdevs_list": [ 00:14:02.842 { 00:14:02.842 "name": null, 00:14:02.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.842 "is_configured": false, 00:14:02.842 "data_offset": 0, 00:14:02.842 "data_size": 63488 00:14:02.842 }, 00:14:02.842 { 00:14:02.842 "name": "pt2", 00:14:02.842 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.842 "is_configured": true, 00:14:02.842 "data_offset": 2048, 00:14:02.842 "data_size": 63488 00:14:02.842 }, 00:14:02.842 { 00:14:02.842 "name": "pt3", 00:14:02.842 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:02.842 "is_configured": true, 00:14:02.842 "data_offset": 2048, 00:14:02.842 "data_size": 63488 00:14:02.842 } 00:14:02.842 ] 00:14:02.842 }' 00:14:02.842 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.842 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.103 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:03.103 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.103 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.103 [2024-11-18 03:14:06.639638] bdev_raid.c:2407:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:14:03.103 [2024-11-18 03:14:06.639730] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.103 [2024-11-18 03:14:06.639826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.103 [2024-11-18 03:14:06.639885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.103 [2024-11-18 03:14:06.639895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:14:03.103 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.103 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:03.103 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.103 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.103 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.103 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.103 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:03.103 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:03.103 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:03.103 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:03.103 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:03.103 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.103 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.363 03:14:06 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.363 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:03.363 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:03.363 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:03.363 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.363 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.363 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.363 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:03.363 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:03.363 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:03.363 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:03.363 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:03.363 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.363 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.363 [2024-11-18 03:14:06.703507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:03.363 [2024-11-18 03:14:06.703597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.363 [2024-11-18 03:14:06.703631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:03.363 [2024-11-18 03:14:06.703657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:03.363 [2024-11-18 03:14:06.705825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.363 [2024-11-18 03:14:06.705895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:03.363 [2024-11-18 03:14:06.705994] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:03.364 [2024-11-18 03:14:06.706076] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:03.364 pt2 00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.364 "name": "raid_bdev1", 00:14:03.364 "uuid": "9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede", 00:14:03.364 "strip_size_kb": 64, 00:14:03.364 "state": "configuring", 00:14:03.364 "raid_level": "raid5f", 00:14:03.364 "superblock": true, 00:14:03.364 "num_base_bdevs": 3, 00:14:03.364 "num_base_bdevs_discovered": 1, 00:14:03.364 "num_base_bdevs_operational": 2, 00:14:03.364 "base_bdevs_list": [ 00:14:03.364 { 00:14:03.364 "name": null, 00:14:03.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.364 "is_configured": false, 00:14:03.364 "data_offset": 2048, 00:14:03.364 "data_size": 63488 00:14:03.364 }, 00:14:03.364 { 00:14:03.364 "name": "pt2", 00:14:03.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.364 "is_configured": true, 00:14:03.364 "data_offset": 2048, 00:14:03.364 "data_size": 63488 00:14:03.364 }, 00:14:03.364 { 00:14:03.364 "name": null, 00:14:03.364 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:03.364 "is_configured": false, 00:14:03.364 "data_offset": 2048, 00:14:03.364 "data_size": 63488 00:14:03.364 } 00:14:03.364 ] 00:14:03.364 }' 00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.364 03:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- 
# i=2 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.625 [2024-11-18 03:14:07.142790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:03.625 [2024-11-18 03:14:07.142858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.625 [2024-11-18 03:14:07.142880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:03.625 [2024-11-18 03:14:07.142889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.625 [2024-11-18 03:14:07.143322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.625 [2024-11-18 03:14:07.143346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:03.625 [2024-11-18 03:14:07.143419] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:03.625 [2024-11-18 03:14:07.143445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:03.625 [2024-11-18 03:14:07.143540] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:03.625 [2024-11-18 03:14:07.143548] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:03.625 [2024-11-18 03:14:07.143775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:03.625 [2024-11-18 03:14:07.144232] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:03.625 [2024-11-18 03:14:07.144252] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000006d00 00:14:03.625 [2024-11-18 03:14:07.144512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.625 pt3 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.625 "name": "raid_bdev1", 00:14:03.625 "uuid": "9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede", 00:14:03.625 "strip_size_kb": 64, 00:14:03.625 "state": "online", 00:14:03.625 "raid_level": "raid5f", 00:14:03.625 "superblock": true, 00:14:03.625 "num_base_bdevs": 3, 00:14:03.625 "num_base_bdevs_discovered": 2, 00:14:03.625 "num_base_bdevs_operational": 2, 00:14:03.625 "base_bdevs_list": [ 00:14:03.625 { 00:14:03.625 "name": null, 00:14:03.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.625 "is_configured": false, 00:14:03.625 "data_offset": 2048, 00:14:03.625 "data_size": 63488 00:14:03.625 }, 00:14:03.625 { 00:14:03.625 "name": "pt2", 00:14:03.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.625 "is_configured": true, 00:14:03.625 "data_offset": 2048, 00:14:03.625 "data_size": 63488 00:14:03.625 }, 00:14:03.625 { 00:14:03.625 "name": "pt3", 00:14:03.625 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:03.625 "is_configured": true, 00:14:03.625 "data_offset": 2048, 00:14:03.625 "data_size": 63488 00:14:03.625 } 00:14:03.625 ] 00:14:03.625 }' 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.625 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.202 [2024-11-18 03:14:07.554070] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:04.202 [2024-11-18 03:14:07.554163] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.202 [2024-11-18 03:14:07.554262] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:14:04.202 [2024-11-18 03:14:07.554355] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.202 [2024-11-18 03:14:07.554398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:04.202 03:14:07 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.202 [2024-11-18 03:14:07.609947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:04.202 [2024-11-18 03:14:07.610078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.202 [2024-11-18 03:14:07.610115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:04.202 [2024-11-18 03:14:07.610153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.202 [2024-11-18 03:14:07.612544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.202 [2024-11-18 03:14:07.612623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:04.202 [2024-11-18 03:14:07.612715] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:04.202 [2024-11-18 03:14:07.612796] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:04.202 [2024-11-18 03:14:07.612980] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:04.202 [2024-11-18 03:14:07.613052] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:04.202 [2024-11-18 03:14:07.613094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:14:04.202 [2024-11-18 03:14:07.613189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:04.202 pt1 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:04.202 03:14:07 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.202 "name": "raid_bdev1", 00:14:04.202 "uuid": "9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede", 00:14:04.202 "strip_size_kb": 64, 00:14:04.202 "state": "configuring", 00:14:04.202 "raid_level": "raid5f", 00:14:04.202 
"superblock": true, 00:14:04.202 "num_base_bdevs": 3, 00:14:04.202 "num_base_bdevs_discovered": 1, 00:14:04.202 "num_base_bdevs_operational": 2, 00:14:04.202 "base_bdevs_list": [ 00:14:04.202 { 00:14:04.202 "name": null, 00:14:04.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.202 "is_configured": false, 00:14:04.202 "data_offset": 2048, 00:14:04.202 "data_size": 63488 00:14:04.202 }, 00:14:04.202 { 00:14:04.202 "name": "pt2", 00:14:04.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.202 "is_configured": true, 00:14:04.202 "data_offset": 2048, 00:14:04.202 "data_size": 63488 00:14:04.202 }, 00:14:04.202 { 00:14:04.202 "name": null, 00:14:04.202 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:04.202 "is_configured": false, 00:14:04.202 "data_offset": 2048, 00:14:04.202 "data_size": 63488 00:14:04.202 } 00:14:04.202 ] 00:14:04.202 }' 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.202 03:14:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.816 [2024-11-18 03:14:08.121091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:04.816 [2024-11-18 03:14:08.121157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.816 [2024-11-18 03:14:08.121175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:04.816 [2024-11-18 03:14:08.121184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.816 [2024-11-18 03:14:08.121583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.816 [2024-11-18 03:14:08.121604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:04.816 [2024-11-18 03:14:08.121677] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:04.816 [2024-11-18 03:14:08.121698] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:04.816 [2024-11-18 03:14:08.121783] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:04.816 [2024-11-18 03:14:08.121795] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:04.816 [2024-11-18 03:14:08.122051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:04.816 [2024-11-18 03:14:08.122572] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:04.816 [2024-11-18 03:14:08.122591] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:04.816 [2024-11-18 03:14:08.122764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.816 pt3 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.816 "name": "raid_bdev1", 00:14:04.816 "uuid": "9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede", 00:14:04.816 "strip_size_kb": 64, 00:14:04.816 "state": "online", 00:14:04.816 "raid_level": 
"raid5f", 00:14:04.816 "superblock": true, 00:14:04.816 "num_base_bdevs": 3, 00:14:04.816 "num_base_bdevs_discovered": 2, 00:14:04.816 "num_base_bdevs_operational": 2, 00:14:04.816 "base_bdevs_list": [ 00:14:04.816 { 00:14:04.816 "name": null, 00:14:04.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.816 "is_configured": false, 00:14:04.816 "data_offset": 2048, 00:14:04.816 "data_size": 63488 00:14:04.816 }, 00:14:04.816 { 00:14:04.816 "name": "pt2", 00:14:04.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.816 "is_configured": true, 00:14:04.816 "data_offset": 2048, 00:14:04.816 "data_size": 63488 00:14:04.816 }, 00:14:04.816 { 00:14:04.816 "name": "pt3", 00:14:04.816 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:04.816 "is_configured": true, 00:14:04.816 "data_offset": 2048, 00:14:04.816 "data_size": 63488 00:14:04.816 } 00:14:04.816 ] 00:14:04.816 }' 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.816 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:05.077 [2024-11-18 03:14:08.572559] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede '!=' 9e7c0070-bd8a-4d09-a99b-e2fd0b1b8ede ']' 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91790 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 91790 ']' 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 91790 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91790 00:14:05.077 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:05.078 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:05.078 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91790' 00:14:05.078 killing process with pid 91790 00:14:05.078 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 91790 00:14:05.078 [2024-11-18 03:14:08.650912] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:05.078 [2024-11-18 03:14:08.651075] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:14:05.078 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 91790 00:14:05.078 [2024-11-18 03:14:08.651176] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:05.078 [2024-11-18 03:14:08.651190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:05.338 [2024-11-18 03:14:08.684644] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:05.599 03:14:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:05.599 00:14:05.599 real 0m6.438s 00:14:05.599 user 0m10.804s 00:14:05.599 sys 0m1.335s 00:14:05.599 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:05.599 ************************************ 00:14:05.599 END TEST raid5f_superblock_test 00:14:05.599 ************************************ 00:14:05.599 03:14:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.599 03:14:08 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:05.599 03:14:08 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:05.599 03:14:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:05.599 03:14:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:05.599 03:14:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:05.599 ************************************ 00:14:05.599 START TEST raid5f_rebuild_test 00:14:05.599 ************************************ 00:14:05.599 03:14:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:14:05.599 03:14:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:05.599 03:14:08 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:05.599 03:14:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:05.599 03:14:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:05.599 03:14:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:05.599 03:14:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=92220 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 92220 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 92220 ']' 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:05.599 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.599 [2024-11-18 03:14:09.092833] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:05.599 [2024-11-18 03:14:09.093012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92220 ] 00:14:05.599 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:05.600 Zero copy mechanism will not be used. 00:14:05.859 [2024-11-18 03:14:09.253882] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.860 [2024-11-18 03:14:09.303512] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.860 [2024-11-18 03:14:09.345796] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.860 [2024-11-18 03:14:09.345912] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.431 BaseBdev1_malloc 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.431 [2024-11-18 03:14:09.935939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:06.431 [2024-11-18 03:14:09.936023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.431 [2024-11-18 03:14:09.936054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:06.431 [2024-11-18 03:14:09.936076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.431 [2024-11-18 03:14:09.938188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.431 [2024-11-18 03:14:09.938226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:06.431 BaseBdev1 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.431 BaseBdev2_malloc 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.431 [2024-11-18 03:14:09.975654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:06.431 [2024-11-18 03:14:09.975781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.431 [2024-11-18 03:14:09.975815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:06.431 [2024-11-18 03:14:09.975827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.431 [2024-11-18 03:14:09.978664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.431 [2024-11-18 03:14:09.978711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:06.431 BaseBdev2 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.431 BaseBdev3_malloc 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.431 03:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.431 [2024-11-18 03:14:10.004209] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:06.431 [2024-11-18 03:14:10.004260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.431 [2024-11-18 03:14:10.004284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:06.431 [2024-11-18 03:14:10.004292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.692 [2024-11-18 03:14:10.006324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.692 [2024-11-18 03:14:10.006414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:06.692 BaseBdev3 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.692 spare_malloc 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.692 spare_delay 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.692 [2024-11-18 03:14:10.044682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:06.692 [2024-11-18 03:14:10.044732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.692 [2024-11-18 03:14:10.044754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:06.692 [2024-11-18 03:14:10.044763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.692 [2024-11-18 03:14:10.046930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.692 [2024-11-18 03:14:10.046978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:06.692 spare 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.692 [2024-11-18 03:14:10.056722] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:06.692 [2024-11-18 03:14:10.058617] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:06.692 [2024-11-18 03:14:10.058683] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:06.692 [2024-11-18 03:14:10.058758] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:06.692 [2024-11-18 03:14:10.058769] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:06.692 [2024-11-18 
03:14:10.059060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:06.692 [2024-11-18 03:14:10.059464] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:06.692 [2024-11-18 03:14:10.059487] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:06.692 [2024-11-18 03:14:10.059610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.692 "name": "raid_bdev1", 00:14:06.692 "uuid": "e97590a3-d0ef-4cbd-8f30-c3b44df863a6", 00:14:06.692 "strip_size_kb": 64, 00:14:06.692 "state": "online", 00:14:06.692 "raid_level": "raid5f", 00:14:06.692 "superblock": false, 00:14:06.692 "num_base_bdevs": 3, 00:14:06.692 "num_base_bdevs_discovered": 3, 00:14:06.692 "num_base_bdevs_operational": 3, 00:14:06.692 "base_bdevs_list": [ 00:14:06.692 { 00:14:06.692 "name": "BaseBdev1", 00:14:06.692 "uuid": "351d76be-2a5a-58b8-ae53-90f934cc467c", 00:14:06.692 "is_configured": true, 00:14:06.692 "data_offset": 0, 00:14:06.692 "data_size": 65536 00:14:06.692 }, 00:14:06.692 { 00:14:06.692 "name": "BaseBdev2", 00:14:06.692 "uuid": "5ca31990-299a-5c92-8b22-010db21ae9e7", 00:14:06.692 "is_configured": true, 00:14:06.692 "data_offset": 0, 00:14:06.692 "data_size": 65536 00:14:06.692 }, 00:14:06.692 { 00:14:06.692 "name": "BaseBdev3", 00:14:06.692 "uuid": "1bb21681-873c-590e-b214-591827077686", 00:14:06.692 "is_configured": true, 00:14:06.692 "data_offset": 0, 00:14:06.692 "data_size": 65536 00:14:06.692 } 00:14:06.692 ] 00:14:06.692 }' 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.692 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.952 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:06.952 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:06.952 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.952 03:14:10 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.952 [2024-11-18 03:14:10.508415] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.952 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.213 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:07.213 [2024-11-18 03:14:10.775842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:07.473 /dev/nbd0 00:14:07.473 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:07.473 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:07.473 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:07.473 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:07.473 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:07.473 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:07.473 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:07.473 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:07.473 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:07.473 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:07.473 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.474 1+0 records in 00:14:07.474 1+0 records out 00:14:07.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474353 s, 8.6 MB/s 00:14:07.474 
03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.474 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:07.474 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.474 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:07.474 03:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:07.474 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.474 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.474 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:07.474 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:07.474 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:07.474 03:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:07.733 512+0 records in 00:14:07.733 512+0 records out 00:14:07.733 67108864 bytes (67 MB, 64 MiB) copied, 0.290021 s, 231 MB/s 00:14:07.733 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:07.733 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.733 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:07.733 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.733 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:07.733 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:14:07.733 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:07.994 [2024-11-18 03:14:11.364041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.994 [2024-11-18 03:14:11.379716] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.994 03:14:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.994 "name": "raid_bdev1", 00:14:07.994 "uuid": "e97590a3-d0ef-4cbd-8f30-c3b44df863a6", 00:14:07.994 "strip_size_kb": 64, 00:14:07.994 "state": "online", 00:14:07.994 "raid_level": "raid5f", 00:14:07.994 "superblock": false, 00:14:07.994 "num_base_bdevs": 3, 00:14:07.994 "num_base_bdevs_discovered": 2, 00:14:07.994 "num_base_bdevs_operational": 2, 00:14:07.994 "base_bdevs_list": [ 00:14:07.994 { 00:14:07.994 "name": null, 00:14:07.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.994 "is_configured": false, 00:14:07.994 "data_offset": 0, 00:14:07.994 "data_size": 65536 00:14:07.994 }, 00:14:07.994 { 00:14:07.994 
"name": "BaseBdev2", 00:14:07.994 "uuid": "5ca31990-299a-5c92-8b22-010db21ae9e7", 00:14:07.994 "is_configured": true, 00:14:07.994 "data_offset": 0, 00:14:07.994 "data_size": 65536 00:14:07.994 }, 00:14:07.994 { 00:14:07.994 "name": "BaseBdev3", 00:14:07.994 "uuid": "1bb21681-873c-590e-b214-591827077686", 00:14:07.994 "is_configured": true, 00:14:07.994 "data_offset": 0, 00:14:07.994 "data_size": 65536 00:14:07.994 } 00:14:07.994 ] 00:14:07.994 }' 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.994 03:14:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.255 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:08.255 03:14:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.255 03:14:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.255 [2024-11-18 03:14:11.791025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.255 [2024-11-18 03:14:11.794878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:14:08.255 [2024-11-18 03:14:11.797113] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:08.255 03:14:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.255 03:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:09.638 03:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.638 03:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.638 03:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.638 03:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:14:09.638 03:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.638 03:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.638 03:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.638 03:14:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.638 03:14:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.638 03:14:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.638 03:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.638 "name": "raid_bdev1", 00:14:09.638 "uuid": "e97590a3-d0ef-4cbd-8f30-c3b44df863a6", 00:14:09.638 "strip_size_kb": 64, 00:14:09.638 "state": "online", 00:14:09.638 "raid_level": "raid5f", 00:14:09.638 "superblock": false, 00:14:09.638 "num_base_bdevs": 3, 00:14:09.638 "num_base_bdevs_discovered": 3, 00:14:09.638 "num_base_bdevs_operational": 3, 00:14:09.638 "process": { 00:14:09.638 "type": "rebuild", 00:14:09.638 "target": "spare", 00:14:09.638 "progress": { 00:14:09.638 "blocks": 20480, 00:14:09.638 "percent": 15 00:14:09.638 } 00:14:09.638 }, 00:14:09.638 "base_bdevs_list": [ 00:14:09.638 { 00:14:09.638 "name": "spare", 00:14:09.638 "uuid": "3a464b7f-3c88-5304-81cc-b085cada26e4", 00:14:09.638 "is_configured": true, 00:14:09.638 "data_offset": 0, 00:14:09.638 "data_size": 65536 00:14:09.638 }, 00:14:09.638 { 00:14:09.638 "name": "BaseBdev2", 00:14:09.638 "uuid": "5ca31990-299a-5c92-8b22-010db21ae9e7", 00:14:09.638 "is_configured": true, 00:14:09.638 "data_offset": 0, 00:14:09.638 "data_size": 65536 00:14:09.638 }, 00:14:09.638 { 00:14:09.638 "name": "BaseBdev3", 00:14:09.638 "uuid": "1bb21681-873c-590e-b214-591827077686", 00:14:09.638 "is_configured": true, 00:14:09.638 "data_offset": 0, 00:14:09.638 
"data_size": 65536 00:14:09.638 } 00:14:09.638 ] 00:14:09.638 }' 00:14:09.638 03:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.638 03:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.638 03:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.638 03:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.638 03:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:09.638 03:14:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.638 03:14:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.638 [2024-11-18 03:14:12.964033] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.638 [2024-11-18 03:14:13.005010] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:09.638 [2024-11-18 03:14:13.005071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.638 [2024-11-18 03:14:13.005088] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.639 [2024-11-18 03:14:13.005098] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.639 "name": "raid_bdev1", 00:14:09.639 "uuid": "e97590a3-d0ef-4cbd-8f30-c3b44df863a6", 00:14:09.639 "strip_size_kb": 64, 00:14:09.639 "state": "online", 00:14:09.639 "raid_level": "raid5f", 00:14:09.639 "superblock": false, 00:14:09.639 "num_base_bdevs": 3, 00:14:09.639 "num_base_bdevs_discovered": 2, 00:14:09.639 "num_base_bdevs_operational": 2, 00:14:09.639 "base_bdevs_list": [ 00:14:09.639 { 00:14:09.639 "name": null, 00:14:09.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.639 "is_configured": false, 00:14:09.639 "data_offset": 0, 00:14:09.639 "data_size": 65536 00:14:09.639 }, 00:14:09.639 { 00:14:09.639 "name": "BaseBdev2", 00:14:09.639 
"uuid": "5ca31990-299a-5c92-8b22-010db21ae9e7", 00:14:09.639 "is_configured": true, 00:14:09.639 "data_offset": 0, 00:14:09.639 "data_size": 65536 00:14:09.639 }, 00:14:09.639 { 00:14:09.639 "name": "BaseBdev3", 00:14:09.639 "uuid": "1bb21681-873c-590e-b214-591827077686", 00:14:09.639 "is_configured": true, 00:14:09.639 "data_offset": 0, 00:14:09.639 "data_size": 65536 00:14:09.639 } 00:14:09.639 ] 00:14:09.639 }' 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.639 03:14:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.899 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:09.899 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.899 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:09.899 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:09.899 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.899 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.899 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.899 03:14:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.899 03:14:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.899 03:14:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.899 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.899 "name": "raid_bdev1", 00:14:09.899 "uuid": "e97590a3-d0ef-4cbd-8f30-c3b44df863a6", 00:14:09.899 "strip_size_kb": 64, 00:14:09.899 "state": "online", 00:14:09.899 "raid_level": 
"raid5f", 00:14:09.899 "superblock": false, 00:14:09.899 "num_base_bdevs": 3, 00:14:09.899 "num_base_bdevs_discovered": 2, 00:14:09.899 "num_base_bdevs_operational": 2, 00:14:09.899 "base_bdevs_list": [ 00:14:09.899 { 00:14:09.899 "name": null, 00:14:09.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.899 "is_configured": false, 00:14:09.899 "data_offset": 0, 00:14:09.899 "data_size": 65536 00:14:09.899 }, 00:14:09.899 { 00:14:09.899 "name": "BaseBdev2", 00:14:09.899 "uuid": "5ca31990-299a-5c92-8b22-010db21ae9e7", 00:14:09.899 "is_configured": true, 00:14:09.899 "data_offset": 0, 00:14:09.899 "data_size": 65536 00:14:09.899 }, 00:14:09.899 { 00:14:09.899 "name": "BaseBdev3", 00:14:09.899 "uuid": "1bb21681-873c-590e-b214-591827077686", 00:14:09.899 "is_configured": true, 00:14:09.899 "data_offset": 0, 00:14:09.899 "data_size": 65536 00:14:09.899 } 00:14:09.899 ] 00:14:09.899 }' 00:14:09.899 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.159 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.159 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.160 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.160 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:10.160 03:14:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.160 03:14:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.160 [2024-11-18 03:14:13.541734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:10.160 [2024-11-18 03:14:13.545463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:14:10.160 [2024-11-18 03:14:13.547692] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:10.160 03:14:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.160 03:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.101 "name": "raid_bdev1", 00:14:11.101 "uuid": "e97590a3-d0ef-4cbd-8f30-c3b44df863a6", 00:14:11.101 "strip_size_kb": 64, 00:14:11.101 "state": "online", 00:14:11.101 "raid_level": "raid5f", 00:14:11.101 "superblock": false, 00:14:11.101 "num_base_bdevs": 3, 00:14:11.101 "num_base_bdevs_discovered": 3, 00:14:11.101 "num_base_bdevs_operational": 3, 00:14:11.101 "process": { 00:14:11.101 "type": "rebuild", 00:14:11.101 "target": "spare", 00:14:11.101 "progress": { 00:14:11.101 "blocks": 20480, 
00:14:11.101 "percent": 15 00:14:11.101 } 00:14:11.101 }, 00:14:11.101 "base_bdevs_list": [ 00:14:11.101 { 00:14:11.101 "name": "spare", 00:14:11.101 "uuid": "3a464b7f-3c88-5304-81cc-b085cada26e4", 00:14:11.101 "is_configured": true, 00:14:11.101 "data_offset": 0, 00:14:11.101 "data_size": 65536 00:14:11.101 }, 00:14:11.101 { 00:14:11.101 "name": "BaseBdev2", 00:14:11.101 "uuid": "5ca31990-299a-5c92-8b22-010db21ae9e7", 00:14:11.101 "is_configured": true, 00:14:11.101 "data_offset": 0, 00:14:11.101 "data_size": 65536 00:14:11.101 }, 00:14:11.101 { 00:14:11.101 "name": "BaseBdev3", 00:14:11.101 "uuid": "1bb21681-873c-590e-b214-591827077686", 00:14:11.101 "is_configured": true, 00:14:11.101 "data_offset": 0, 00:14:11.101 "data_size": 65536 00:14:11.101 } 00:14:11.101 ] 00:14:11.101 }' 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=448 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.101 03:14:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.362 03:14:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.362 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.362 "name": "raid_bdev1", 00:14:11.362 "uuid": "e97590a3-d0ef-4cbd-8f30-c3b44df863a6", 00:14:11.362 "strip_size_kb": 64, 00:14:11.362 "state": "online", 00:14:11.362 "raid_level": "raid5f", 00:14:11.362 "superblock": false, 00:14:11.362 "num_base_bdevs": 3, 00:14:11.362 "num_base_bdevs_discovered": 3, 00:14:11.362 "num_base_bdevs_operational": 3, 00:14:11.362 "process": { 00:14:11.362 "type": "rebuild", 00:14:11.362 "target": "spare", 00:14:11.362 "progress": { 00:14:11.362 "blocks": 22528, 00:14:11.362 "percent": 17 00:14:11.362 } 00:14:11.362 }, 00:14:11.362 "base_bdevs_list": [ 00:14:11.362 { 00:14:11.362 "name": "spare", 00:14:11.362 "uuid": "3a464b7f-3c88-5304-81cc-b085cada26e4", 00:14:11.362 "is_configured": true, 00:14:11.362 "data_offset": 0, 00:14:11.362 "data_size": 65536 00:14:11.362 }, 00:14:11.362 { 00:14:11.362 "name": "BaseBdev2", 00:14:11.362 "uuid": "5ca31990-299a-5c92-8b22-010db21ae9e7", 00:14:11.362 "is_configured": true, 00:14:11.362 "data_offset": 0, 00:14:11.362 
"data_size": 65536 00:14:11.362 }, 00:14:11.362 { 00:14:11.362 "name": "BaseBdev3", 00:14:11.362 "uuid": "1bb21681-873c-590e-b214-591827077686", 00:14:11.362 "is_configured": true, 00:14:11.362 "data_offset": 0, 00:14:11.362 "data_size": 65536 00:14:11.362 } 00:14:11.362 ] 00:14:11.362 }' 00:14:11.362 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.362 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.362 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.362 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.362 03:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:12.303 03:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:12.303 03:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.303 03:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.303 03:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.303 03:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.303 03:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.303 03:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.303 03:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.303 03:14:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.303 03:14:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.303 03:14:15 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.303 03:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.303 "name": "raid_bdev1", 00:14:12.303 "uuid": "e97590a3-d0ef-4cbd-8f30-c3b44df863a6", 00:14:12.303 "strip_size_kb": 64, 00:14:12.303 "state": "online", 00:14:12.303 "raid_level": "raid5f", 00:14:12.303 "superblock": false, 00:14:12.303 "num_base_bdevs": 3, 00:14:12.303 "num_base_bdevs_discovered": 3, 00:14:12.303 "num_base_bdevs_operational": 3, 00:14:12.303 "process": { 00:14:12.303 "type": "rebuild", 00:14:12.303 "target": "spare", 00:14:12.303 "progress": { 00:14:12.303 "blocks": 45056, 00:14:12.303 "percent": 34 00:14:12.303 } 00:14:12.303 }, 00:14:12.303 "base_bdevs_list": [ 00:14:12.303 { 00:14:12.303 "name": "spare", 00:14:12.303 "uuid": "3a464b7f-3c88-5304-81cc-b085cada26e4", 00:14:12.303 "is_configured": true, 00:14:12.303 "data_offset": 0, 00:14:12.303 "data_size": 65536 00:14:12.303 }, 00:14:12.303 { 00:14:12.303 "name": "BaseBdev2", 00:14:12.303 "uuid": "5ca31990-299a-5c92-8b22-010db21ae9e7", 00:14:12.303 "is_configured": true, 00:14:12.303 "data_offset": 0, 00:14:12.303 "data_size": 65536 00:14:12.303 }, 00:14:12.303 { 00:14:12.303 "name": "BaseBdev3", 00:14:12.303 "uuid": "1bb21681-873c-590e-b214-591827077686", 00:14:12.303 "is_configured": true, 00:14:12.303 "data_offset": 0, 00:14:12.303 "data_size": 65536 00:14:12.303 } 00:14:12.303 ] 00:14:12.303 }' 00:14:12.303 03:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.564 03:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.564 03:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.564 03:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.564 03:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:14:13.504 03:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:13.504 03:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.504 03:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.504 03:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.504 03:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.504 03:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.504 03:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.504 03:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.504 03:14:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.504 03:14:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.504 03:14:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.504 03:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.504 "name": "raid_bdev1", 00:14:13.504 "uuid": "e97590a3-d0ef-4cbd-8f30-c3b44df863a6", 00:14:13.504 "strip_size_kb": 64, 00:14:13.504 "state": "online", 00:14:13.504 "raid_level": "raid5f", 00:14:13.504 "superblock": false, 00:14:13.504 "num_base_bdevs": 3, 00:14:13.504 "num_base_bdevs_discovered": 3, 00:14:13.504 "num_base_bdevs_operational": 3, 00:14:13.504 "process": { 00:14:13.504 "type": "rebuild", 00:14:13.504 "target": "spare", 00:14:13.504 "progress": { 00:14:13.504 "blocks": 69632, 00:14:13.504 "percent": 53 00:14:13.504 } 00:14:13.504 }, 00:14:13.504 "base_bdevs_list": [ 00:14:13.504 { 00:14:13.504 "name": "spare", 00:14:13.504 "uuid": 
"3a464b7f-3c88-5304-81cc-b085cada26e4", 00:14:13.504 "is_configured": true, 00:14:13.504 "data_offset": 0, 00:14:13.504 "data_size": 65536 00:14:13.504 }, 00:14:13.504 { 00:14:13.504 "name": "BaseBdev2", 00:14:13.504 "uuid": "5ca31990-299a-5c92-8b22-010db21ae9e7", 00:14:13.504 "is_configured": true, 00:14:13.504 "data_offset": 0, 00:14:13.504 "data_size": 65536 00:14:13.504 }, 00:14:13.504 { 00:14:13.504 "name": "BaseBdev3", 00:14:13.504 "uuid": "1bb21681-873c-590e-b214-591827077686", 00:14:13.504 "is_configured": true, 00:14:13.504 "data_offset": 0, 00:14:13.504 "data_size": 65536 00:14:13.504 } 00:14:13.504 ] 00:14:13.504 }' 00:14:13.504 03:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.504 03:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.504 03:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.763 03:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.763 03:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:14.704 03:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:14.704 03:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.704 03:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.704 03:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.704 03:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.704 03:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.704 03:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.704 03:14:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.704 03:14:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.704 03:14:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.704 03:14:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.704 03:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.704 "name": "raid_bdev1", 00:14:14.704 "uuid": "e97590a3-d0ef-4cbd-8f30-c3b44df863a6", 00:14:14.704 "strip_size_kb": 64, 00:14:14.704 "state": "online", 00:14:14.704 "raid_level": "raid5f", 00:14:14.704 "superblock": false, 00:14:14.704 "num_base_bdevs": 3, 00:14:14.704 "num_base_bdevs_discovered": 3, 00:14:14.704 "num_base_bdevs_operational": 3, 00:14:14.704 "process": { 00:14:14.704 "type": "rebuild", 00:14:14.704 "target": "spare", 00:14:14.704 "progress": { 00:14:14.704 "blocks": 92160, 00:14:14.704 "percent": 70 00:14:14.704 } 00:14:14.704 }, 00:14:14.704 "base_bdevs_list": [ 00:14:14.704 { 00:14:14.704 "name": "spare", 00:14:14.704 "uuid": "3a464b7f-3c88-5304-81cc-b085cada26e4", 00:14:14.704 "is_configured": true, 00:14:14.704 "data_offset": 0, 00:14:14.704 "data_size": 65536 00:14:14.704 }, 00:14:14.704 { 00:14:14.704 "name": "BaseBdev2", 00:14:14.704 "uuid": "5ca31990-299a-5c92-8b22-010db21ae9e7", 00:14:14.704 "is_configured": true, 00:14:14.704 "data_offset": 0, 00:14:14.704 "data_size": 65536 00:14:14.704 }, 00:14:14.704 { 00:14:14.704 "name": "BaseBdev3", 00:14:14.704 "uuid": "1bb21681-873c-590e-b214-591827077686", 00:14:14.704 "is_configured": true, 00:14:14.704 "data_offset": 0, 00:14:14.704 "data_size": 65536 00:14:14.704 } 00:14:14.704 ] 00:14:14.704 }' 00:14:14.704 03:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.704 03:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.704 03:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.704 03:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.704 03:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:16.087 03:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:16.087 03:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.087 03:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.087 03:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.087 03:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.087 03:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.087 03:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.087 03:14:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.087 03:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.087 03:14:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.087 03:14:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.087 03:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.087 "name": "raid_bdev1", 00:14:16.087 "uuid": "e97590a3-d0ef-4cbd-8f30-c3b44df863a6", 00:14:16.087 "strip_size_kb": 64, 00:14:16.087 "state": "online", 00:14:16.087 "raid_level": "raid5f", 00:14:16.087 "superblock": false, 00:14:16.087 "num_base_bdevs": 3, 00:14:16.087 "num_base_bdevs_discovered": 3, 00:14:16.087 
"num_base_bdevs_operational": 3, 00:14:16.087 "process": { 00:14:16.087 "type": "rebuild", 00:14:16.087 "target": "spare", 00:14:16.087 "progress": { 00:14:16.087 "blocks": 114688, 00:14:16.087 "percent": 87 00:14:16.087 } 00:14:16.087 }, 00:14:16.087 "base_bdevs_list": [ 00:14:16.087 { 00:14:16.087 "name": "spare", 00:14:16.087 "uuid": "3a464b7f-3c88-5304-81cc-b085cada26e4", 00:14:16.087 "is_configured": true, 00:14:16.087 "data_offset": 0, 00:14:16.087 "data_size": 65536 00:14:16.087 }, 00:14:16.087 { 00:14:16.087 "name": "BaseBdev2", 00:14:16.087 "uuid": "5ca31990-299a-5c92-8b22-010db21ae9e7", 00:14:16.087 "is_configured": true, 00:14:16.087 "data_offset": 0, 00:14:16.087 "data_size": 65536 00:14:16.087 }, 00:14:16.087 { 00:14:16.087 "name": "BaseBdev3", 00:14:16.087 "uuid": "1bb21681-873c-590e-b214-591827077686", 00:14:16.087 "is_configured": true, 00:14:16.087 "data_offset": 0, 00:14:16.087 "data_size": 65536 00:14:16.087 } 00:14:16.087 ] 00:14:16.087 }' 00:14:16.087 03:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.087 03:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.088 03:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.088 03:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.088 03:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:16.658 [2024-11-18 03:14:19.986660] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:16.658 [2024-11-18 03:14:19.986831] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:16.658 [2024-11-18 03:14:19.986928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.917 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:14:16.918 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.918 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.918 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.918 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.918 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.918 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.918 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.918 03:14:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.918 03:14:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.918 03:14:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.918 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.918 "name": "raid_bdev1", 00:14:16.918 "uuid": "e97590a3-d0ef-4cbd-8f30-c3b44df863a6", 00:14:16.918 "strip_size_kb": 64, 00:14:16.918 "state": "online", 00:14:16.918 "raid_level": "raid5f", 00:14:16.918 "superblock": false, 00:14:16.918 "num_base_bdevs": 3, 00:14:16.918 "num_base_bdevs_discovered": 3, 00:14:16.918 "num_base_bdevs_operational": 3, 00:14:16.918 "base_bdevs_list": [ 00:14:16.918 { 00:14:16.918 "name": "spare", 00:14:16.918 "uuid": "3a464b7f-3c88-5304-81cc-b085cada26e4", 00:14:16.918 "is_configured": true, 00:14:16.918 "data_offset": 0, 00:14:16.918 "data_size": 65536 00:14:16.918 }, 00:14:16.918 { 00:14:16.918 "name": "BaseBdev2", 00:14:16.918 "uuid": "5ca31990-299a-5c92-8b22-010db21ae9e7", 00:14:16.918 "is_configured": true, 00:14:16.918 
"data_offset": 0, 00:14:16.918 "data_size": 65536 00:14:16.918 }, 00:14:16.918 { 00:14:16.918 "name": "BaseBdev3", 00:14:16.918 "uuid": "1bb21681-873c-590e-b214-591827077686", 00:14:16.918 "is_configured": true, 00:14:16.918 "data_offset": 0, 00:14:16.918 "data_size": 65536 00:14:16.918 } 00:14:16.918 ] 00:14:16.918 }' 00:14:16.918 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.918 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:16.918 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.178 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:17.178 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:17.178 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:17.178 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.178 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:17.178 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:17.178 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.178 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.178 03:14:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.178 03:14:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.178 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.178 03:14:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.178 03:14:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.178 "name": "raid_bdev1", 00:14:17.178 "uuid": "e97590a3-d0ef-4cbd-8f30-c3b44df863a6", 00:14:17.178 "strip_size_kb": 64, 00:14:17.178 "state": "online", 00:14:17.178 "raid_level": "raid5f", 00:14:17.178 "superblock": false, 00:14:17.178 "num_base_bdevs": 3, 00:14:17.178 "num_base_bdevs_discovered": 3, 00:14:17.178 "num_base_bdevs_operational": 3, 00:14:17.178 "base_bdevs_list": [ 00:14:17.178 { 00:14:17.178 "name": "spare", 00:14:17.178 "uuid": "3a464b7f-3c88-5304-81cc-b085cada26e4", 00:14:17.178 "is_configured": true, 00:14:17.178 "data_offset": 0, 00:14:17.178 "data_size": 65536 00:14:17.178 }, 00:14:17.178 { 00:14:17.178 "name": "BaseBdev2", 00:14:17.178 "uuid": "5ca31990-299a-5c92-8b22-010db21ae9e7", 00:14:17.178 "is_configured": true, 00:14:17.178 "data_offset": 0, 00:14:17.178 "data_size": 65536 00:14:17.178 }, 00:14:17.178 { 00:14:17.178 "name": "BaseBdev3", 00:14:17.178 "uuid": "1bb21681-873c-590e-b214-591827077686", 00:14:17.178 "is_configured": true, 00:14:17.178 "data_offset": 0, 00:14:17.178 "data_size": 65536 00:14:17.178 } 00:14:17.178 ] 00:14:17.178 }' 00:14:17.178 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.178 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:17.178 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.178 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:17.179 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:17.179 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.179 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.179 03:14:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.179 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.179 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.179 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.179 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.179 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.179 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.179 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.179 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.179 03:14:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.179 03:14:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.179 03:14:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.179 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.179 "name": "raid_bdev1", 00:14:17.179 "uuid": "e97590a3-d0ef-4cbd-8f30-c3b44df863a6", 00:14:17.179 "strip_size_kb": 64, 00:14:17.179 "state": "online", 00:14:17.179 "raid_level": "raid5f", 00:14:17.179 "superblock": false, 00:14:17.179 "num_base_bdevs": 3, 00:14:17.179 "num_base_bdevs_discovered": 3, 00:14:17.179 "num_base_bdevs_operational": 3, 00:14:17.179 "base_bdevs_list": [ 00:14:17.179 { 00:14:17.179 "name": "spare", 00:14:17.179 "uuid": "3a464b7f-3c88-5304-81cc-b085cada26e4", 00:14:17.179 "is_configured": true, 00:14:17.179 "data_offset": 0, 00:14:17.179 "data_size": 65536 00:14:17.179 }, 00:14:17.179 { 00:14:17.179 
"name": "BaseBdev2", 00:14:17.179 "uuid": "5ca31990-299a-5c92-8b22-010db21ae9e7", 00:14:17.179 "is_configured": true, 00:14:17.179 "data_offset": 0, 00:14:17.179 "data_size": 65536 00:14:17.179 }, 00:14:17.179 { 00:14:17.179 "name": "BaseBdev3", 00:14:17.179 "uuid": "1bb21681-873c-590e-b214-591827077686", 00:14:17.179 "is_configured": true, 00:14:17.179 "data_offset": 0, 00:14:17.179 "data_size": 65536 00:14:17.179 } 00:14:17.179 ] 00:14:17.179 }' 00:14:17.179 03:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.179 03:14:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.750 [2024-11-18 03:14:21.102120] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:17.750 [2024-11-18 03:14:21.102154] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:17.750 [2024-11-18 03:14:21.102245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.750 [2024-11-18 03:14:21.102333] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:17.750 [2024-11-18 03:14:21.102344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:17.750 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:18.010 /dev/nbd0 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:18.010 1+0 records in 00:14:18.010 1+0 records out 00:14:18.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513686 s, 8.0 MB/s 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:18.010 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:18.271 /dev/nbd1 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:18.271 1+0 records in 00:14:18.271 1+0 records out 00:14:18.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227517 s, 18.0 MB/s 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:18.271 03:14:21 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.271 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:18.531 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:18.531 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:18.531 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:18.531 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:18.531 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:18.531 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:18.531 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:18.532 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:14:18.532 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.532 03:14:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 92220 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 92220 ']' 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 92220 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92220 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:18.792 killing process with pid 92220 00:14:18.792 
Received shutdown signal, test time was about 60.000000 seconds 00:14:18.792 00:14:18.792 Latency(us) 00:14:18.792 [2024-11-18T03:14:22.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.792 [2024-11-18T03:14:22.369Z] =================================================================================================================== 00:14:18.792 [2024-11-18T03:14:22.369Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92220' 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 92220 00:14:18.792 [2024-11-18 03:14:22.172252] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:18.792 03:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 92220 00:14:18.792 [2024-11-18 03:14:22.213541] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:19.053 00:14:19.053 real 0m13.444s 00:14:19.053 user 0m16.812s 00:14:19.053 sys 0m1.877s 00:14:19.053 ************************************ 00:14:19.053 END TEST raid5f_rebuild_test 00:14:19.053 ************************************ 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.053 03:14:22 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:14:19.053 03:14:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:19.053 03:14:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:19.053 03:14:22 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:14:19.053 ************************************ 00:14:19.053 START TEST raid5f_rebuild_test_sb 00:14:19.053 ************************************ 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92643 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92643 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 92643 ']' 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:19.053 03:14:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.053 [2024-11-18 03:14:22.606722] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:19.053 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:19.053 Zero copy mechanism will not be used. 00:14:19.053 [2024-11-18 03:14:22.606950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92643 ] 00:14:19.314 [2024-11-18 03:14:22.747780] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.314 [2024-11-18 03:14:22.796606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.314 [2024-11-18 03:14:22.838711] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.314 [2024-11-18 03:14:22.838847] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.884 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.884 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:19.884 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:14:19.884 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:19.884 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.884 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.884 BaseBdev1_malloc 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.145 [2024-11-18 03:14:23.464885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:20.145 [2024-11-18 03:14:23.464969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.145 [2024-11-18 03:14:23.464996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:20.145 [2024-11-18 03:14:23.465017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.145 [2024-11-18 03:14:23.467120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.145 [2024-11-18 03:14:23.467159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:20.145 BaseBdev1 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:20.145 03:14:23 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.145 BaseBdev2_malloc 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.145 [2024-11-18 03:14:23.508913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:20.145 [2024-11-18 03:14:23.509075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.145 [2024-11-18 03:14:23.509113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:20.145 [2024-11-18 03:14:23.509127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.145 [2024-11-18 03:14:23.512164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.145 [2024-11-18 03:14:23.512270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:20.145 BaseBdev2 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:20.145 BaseBdev3_malloc 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.145 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.145 [2024-11-18 03:14:23.537723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:20.145 [2024-11-18 03:14:23.537776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.145 [2024-11-18 03:14:23.537802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:20.146 [2024-11-18 03:14:23.537810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.146 [2024-11-18 03:14:23.539862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.146 [2024-11-18 03:14:23.539949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:20.146 BaseBdev3 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.146 spare_malloc 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.146 spare_delay 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.146 [2024-11-18 03:14:23.578202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:20.146 [2024-11-18 03:14:23.578295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.146 [2024-11-18 03:14:23.578340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:20.146 [2024-11-18 03:14:23.578348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.146 [2024-11-18 03:14:23.580494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.146 [2024-11-18 03:14:23.580528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:20.146 spare 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.146 [2024-11-18 03:14:23.590248] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.146 [2024-11-18 03:14:23.592087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.146 [2024-11-18 03:14:23.592154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:20.146 [2024-11-18 03:14:23.592308] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:20.146 [2024-11-18 03:14:23.592322] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:20.146 [2024-11-18 03:14:23.592558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:20.146 [2024-11-18 03:14:23.592939] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:20.146 [2024-11-18 03:14:23.592954] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:20.146 [2024-11-18 03:14:23.593085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.146 "name": "raid_bdev1", 00:14:20.146 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:20.146 "strip_size_kb": 64, 00:14:20.146 "state": "online", 00:14:20.146 "raid_level": "raid5f", 00:14:20.146 "superblock": true, 00:14:20.146 "num_base_bdevs": 3, 00:14:20.146 "num_base_bdevs_discovered": 3, 00:14:20.146 "num_base_bdevs_operational": 3, 00:14:20.146 "base_bdevs_list": [ 00:14:20.146 { 00:14:20.146 "name": "BaseBdev1", 00:14:20.146 "uuid": "1c59be22-4e61-53aa-8d9a-78ad3301f6cb", 00:14:20.146 "is_configured": true, 00:14:20.146 "data_offset": 2048, 00:14:20.146 "data_size": 63488 00:14:20.146 }, 00:14:20.146 { 00:14:20.146 "name": "BaseBdev2", 00:14:20.146 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:20.146 "is_configured": true, 00:14:20.146 "data_offset": 2048, 00:14:20.146 "data_size": 63488 00:14:20.146 }, 00:14:20.146 { 00:14:20.146 "name": "BaseBdev3", 00:14:20.146 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:20.146 "is_configured": true, 
00:14:20.146 "data_offset": 2048, 00:14:20.146 "data_size": 63488 00:14:20.146 } 00:14:20.146 ] 00:14:20.146 }' 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.146 03:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.717 [2024-11-18 03:14:24.041879] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:20.717 03:14:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.717 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:20.977 [2024-11-18 03:14:24.313282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:20.977 /dev/nbd0 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 
-- # (( i <= 20 )) 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.977 1+0 records in 00:14:20.977 1+0 records out 00:14:20.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615714 s, 6.7 MB/s 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:20.977 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:14:21.237 496+0 records in 00:14:21.237 496+0 records out 00:14:21.237 65011712 bytes (65 MB, 62 MiB) copied, 0.2777 s, 234 MB/s 00:14:21.237 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:21.237 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.237 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:21.237 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:21.237 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:21.237 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:21.237 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:21.498 [2024-11-18 03:14:24.871099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.498 [2024-11-18 03:14:24.887179] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.498 03:14:24 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.498 "name": "raid_bdev1", 00:14:21.498 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:21.498 "strip_size_kb": 64, 00:14:21.498 "state": "online", 00:14:21.498 "raid_level": "raid5f", 00:14:21.498 "superblock": true, 00:14:21.498 "num_base_bdevs": 3, 00:14:21.498 "num_base_bdevs_discovered": 2, 00:14:21.498 "num_base_bdevs_operational": 2, 00:14:21.498 "base_bdevs_list": [ 00:14:21.498 { 00:14:21.498 "name": null, 00:14:21.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.498 "is_configured": false, 00:14:21.498 "data_offset": 0, 00:14:21.498 "data_size": 63488 00:14:21.498 }, 00:14:21.498 { 00:14:21.498 "name": "BaseBdev2", 00:14:21.498 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:21.498 "is_configured": true, 00:14:21.498 "data_offset": 2048, 00:14:21.498 "data_size": 63488 00:14:21.498 }, 00:14:21.498 { 00:14:21.498 "name": "BaseBdev3", 00:14:21.498 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:21.498 "is_configured": true, 00:14:21.498 "data_offset": 2048, 00:14:21.498 "data_size": 63488 00:14:21.498 } 00:14:21.498 ] 00:14:21.498 }' 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.498 03:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.762 03:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:21.762 03:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.762 03:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.762 [2024-11-18 03:14:25.314477] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.762 [2024-11-18 03:14:25.318245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:14:21.762 [2024-11-18 03:14:25.320469] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:21.762 03:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.762 03:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.145 "name": "raid_bdev1", 00:14:23.145 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:23.145 "strip_size_kb": 64, 00:14:23.145 "state": "online", 00:14:23.145 "raid_level": "raid5f", 00:14:23.145 
"superblock": true, 00:14:23.145 "num_base_bdevs": 3, 00:14:23.145 "num_base_bdevs_discovered": 3, 00:14:23.145 "num_base_bdevs_operational": 3, 00:14:23.145 "process": { 00:14:23.145 "type": "rebuild", 00:14:23.145 "target": "spare", 00:14:23.145 "progress": { 00:14:23.145 "blocks": 20480, 00:14:23.145 "percent": 16 00:14:23.145 } 00:14:23.145 }, 00:14:23.145 "base_bdevs_list": [ 00:14:23.145 { 00:14:23.145 "name": "spare", 00:14:23.145 "uuid": "b771c45c-f8a0-542f-8b60-7c817294c9a6", 00:14:23.145 "is_configured": true, 00:14:23.145 "data_offset": 2048, 00:14:23.145 "data_size": 63488 00:14:23.145 }, 00:14:23.145 { 00:14:23.145 "name": "BaseBdev2", 00:14:23.145 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:23.145 "is_configured": true, 00:14:23.145 "data_offset": 2048, 00:14:23.145 "data_size": 63488 00:14:23.145 }, 00:14:23.145 { 00:14:23.145 "name": "BaseBdev3", 00:14:23.145 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:23.145 "is_configured": true, 00:14:23.145 "data_offset": 2048, 00:14:23.145 "data_size": 63488 00:14:23.145 } 00:14:23.145 ] 00:14:23.145 }' 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.145 [2024-11-18 03:14:26.484014] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:14:23.145 [2024-11-18 03:14:26.528127] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:23.145 [2024-11-18 03:14:26.528190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.145 [2024-11-18 03:14:26.528205] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:23.145 [2024-11-18 03:14:26.528218] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.145 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.145 "name": "raid_bdev1", 00:14:23.145 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:23.145 "strip_size_kb": 64, 00:14:23.146 "state": "online", 00:14:23.146 "raid_level": "raid5f", 00:14:23.146 "superblock": true, 00:14:23.146 "num_base_bdevs": 3, 00:14:23.146 "num_base_bdevs_discovered": 2, 00:14:23.146 "num_base_bdevs_operational": 2, 00:14:23.146 "base_bdevs_list": [ 00:14:23.146 { 00:14:23.146 "name": null, 00:14:23.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.146 "is_configured": false, 00:14:23.146 "data_offset": 0, 00:14:23.146 "data_size": 63488 00:14:23.146 }, 00:14:23.146 { 00:14:23.146 "name": "BaseBdev2", 00:14:23.146 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:23.146 "is_configured": true, 00:14:23.146 "data_offset": 2048, 00:14:23.146 "data_size": 63488 00:14:23.146 }, 00:14:23.146 { 00:14:23.146 "name": "BaseBdev3", 00:14:23.146 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:23.146 "is_configured": true, 00:14:23.146 "data_offset": 2048, 00:14:23.146 "data_size": 63488 00:14:23.146 } 00:14:23.146 ] 00:14:23.146 }' 00:14:23.146 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.146 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.406 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:23.406 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.406 03:14:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:23.406 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:23.406 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.406 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.406 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.406 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.406 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.406 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.666 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.666 "name": "raid_bdev1", 00:14:23.666 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:23.666 "strip_size_kb": 64, 00:14:23.666 "state": "online", 00:14:23.666 "raid_level": "raid5f", 00:14:23.666 "superblock": true, 00:14:23.666 "num_base_bdevs": 3, 00:14:23.666 "num_base_bdevs_discovered": 2, 00:14:23.666 "num_base_bdevs_operational": 2, 00:14:23.666 "base_bdevs_list": [ 00:14:23.666 { 00:14:23.666 "name": null, 00:14:23.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.666 "is_configured": false, 00:14:23.666 "data_offset": 0, 00:14:23.666 "data_size": 63488 00:14:23.666 }, 00:14:23.666 { 00:14:23.666 "name": "BaseBdev2", 00:14:23.666 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:23.666 "is_configured": true, 00:14:23.666 "data_offset": 2048, 00:14:23.666 "data_size": 63488 00:14:23.666 }, 00:14:23.666 { 00:14:23.666 "name": "BaseBdev3", 00:14:23.666 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:23.666 "is_configured": true, 00:14:23.666 "data_offset": 2048, 00:14:23.666 
"data_size": 63488 00:14:23.666 } 00:14:23.666 ] 00:14:23.666 }' 00:14:23.666 03:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.666 03:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:23.666 03:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.666 03:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:23.666 03:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:23.666 03:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.666 03:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.666 [2024-11-18 03:14:27.076716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:23.666 [2024-11-18 03:14:27.080405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028eb0 00:14:23.666 [2024-11-18 03:14:27.082591] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:23.666 03:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.666 03:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:24.607 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.607 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.607 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.607 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.607 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:14:24.607 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.607 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.607 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.607 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.607 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.607 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.607 "name": "raid_bdev1", 00:14:24.607 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:24.607 "strip_size_kb": 64, 00:14:24.607 "state": "online", 00:14:24.607 "raid_level": "raid5f", 00:14:24.607 "superblock": true, 00:14:24.607 "num_base_bdevs": 3, 00:14:24.607 "num_base_bdevs_discovered": 3, 00:14:24.607 "num_base_bdevs_operational": 3, 00:14:24.607 "process": { 00:14:24.607 "type": "rebuild", 00:14:24.607 "target": "spare", 00:14:24.607 "progress": { 00:14:24.607 "blocks": 20480, 00:14:24.607 "percent": 16 00:14:24.607 } 00:14:24.607 }, 00:14:24.607 "base_bdevs_list": [ 00:14:24.607 { 00:14:24.607 "name": "spare", 00:14:24.607 "uuid": "b771c45c-f8a0-542f-8b60-7c817294c9a6", 00:14:24.607 "is_configured": true, 00:14:24.607 "data_offset": 2048, 00:14:24.607 "data_size": 63488 00:14:24.607 }, 00:14:24.607 { 00:14:24.607 "name": "BaseBdev2", 00:14:24.607 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:24.607 "is_configured": true, 00:14:24.607 "data_offset": 2048, 00:14:24.607 "data_size": 63488 00:14:24.607 }, 00:14:24.607 { 00:14:24.607 "name": "BaseBdev3", 00:14:24.607 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:24.607 "is_configured": true, 00:14:24.607 "data_offset": 2048, 00:14:24.607 "data_size": 63488 00:14:24.607 } 00:14:24.607 ] 00:14:24.607 }' 
00:14:24.607 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:24.868 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=462 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.868 "name": "raid_bdev1", 00:14:24.868 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:24.868 "strip_size_kb": 64, 00:14:24.868 "state": "online", 00:14:24.868 "raid_level": "raid5f", 00:14:24.868 "superblock": true, 00:14:24.868 "num_base_bdevs": 3, 00:14:24.868 "num_base_bdevs_discovered": 3, 00:14:24.868 "num_base_bdevs_operational": 3, 00:14:24.868 "process": { 00:14:24.868 "type": "rebuild", 00:14:24.868 "target": "spare", 00:14:24.868 "progress": { 00:14:24.868 "blocks": 22528, 00:14:24.868 "percent": 17 00:14:24.868 } 00:14:24.868 }, 00:14:24.868 "base_bdevs_list": [ 00:14:24.868 { 00:14:24.868 "name": "spare", 00:14:24.868 "uuid": "b771c45c-f8a0-542f-8b60-7c817294c9a6", 00:14:24.868 "is_configured": true, 00:14:24.868 "data_offset": 2048, 00:14:24.868 "data_size": 63488 00:14:24.868 }, 00:14:24.868 { 00:14:24.868 "name": "BaseBdev2", 00:14:24.868 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:24.868 "is_configured": true, 00:14:24.868 "data_offset": 2048, 00:14:24.868 "data_size": 63488 00:14:24.868 }, 00:14:24.868 { 00:14:24.868 "name": "BaseBdev3", 00:14:24.868 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:24.868 "is_configured": true, 00:14:24.868 "data_offset": 2048, 00:14:24.868 "data_size": 63488 00:14:24.868 } 00:14:24.868 ] 00:14:24.868 }' 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.868 03:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:25.813 03:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:25.813 03:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.813 03:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.813 03:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.813 03:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.813 03:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.813 03:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.813 03:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.813 03:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.813 03:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.072 03:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.072 03:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.072 "name": "raid_bdev1", 00:14:26.073 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:26.073 "strip_size_kb": 64, 00:14:26.073 "state": "online", 00:14:26.073 "raid_level": "raid5f", 00:14:26.073 "superblock": true, 00:14:26.073 "num_base_bdevs": 3, 00:14:26.073 "num_base_bdevs_discovered": 3, 00:14:26.073 
"num_base_bdevs_operational": 3, 00:14:26.073 "process": { 00:14:26.073 "type": "rebuild", 00:14:26.073 "target": "spare", 00:14:26.073 "progress": { 00:14:26.073 "blocks": 45056, 00:14:26.073 "percent": 35 00:14:26.073 } 00:14:26.073 }, 00:14:26.073 "base_bdevs_list": [ 00:14:26.073 { 00:14:26.073 "name": "spare", 00:14:26.073 "uuid": "b771c45c-f8a0-542f-8b60-7c817294c9a6", 00:14:26.073 "is_configured": true, 00:14:26.073 "data_offset": 2048, 00:14:26.073 "data_size": 63488 00:14:26.073 }, 00:14:26.073 { 00:14:26.073 "name": "BaseBdev2", 00:14:26.073 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:26.073 "is_configured": true, 00:14:26.073 "data_offset": 2048, 00:14:26.073 "data_size": 63488 00:14:26.073 }, 00:14:26.073 { 00:14:26.073 "name": "BaseBdev3", 00:14:26.073 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:26.073 "is_configured": true, 00:14:26.073 "data_offset": 2048, 00:14:26.073 "data_size": 63488 00:14:26.073 } 00:14:26.073 ] 00:14:26.073 }' 00:14:26.073 03:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.073 03:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.073 03:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.073 03:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.073 03:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:27.013 03:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:27.013 03:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.013 03:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.013 03:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:27.013 03:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.013 03:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.013 03:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.013 03:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.013 03:14:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.013 03:14:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.013 03:14:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.013 03:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.013 "name": "raid_bdev1", 00:14:27.013 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:27.013 "strip_size_kb": 64, 00:14:27.013 "state": "online", 00:14:27.013 "raid_level": "raid5f", 00:14:27.013 "superblock": true, 00:14:27.013 "num_base_bdevs": 3, 00:14:27.013 "num_base_bdevs_discovered": 3, 00:14:27.013 "num_base_bdevs_operational": 3, 00:14:27.013 "process": { 00:14:27.013 "type": "rebuild", 00:14:27.013 "target": "spare", 00:14:27.013 "progress": { 00:14:27.013 "blocks": 69632, 00:14:27.013 "percent": 54 00:14:27.013 } 00:14:27.013 }, 00:14:27.013 "base_bdevs_list": [ 00:14:27.013 { 00:14:27.013 "name": "spare", 00:14:27.013 "uuid": "b771c45c-f8a0-542f-8b60-7c817294c9a6", 00:14:27.013 "is_configured": true, 00:14:27.013 "data_offset": 2048, 00:14:27.013 "data_size": 63488 00:14:27.013 }, 00:14:27.013 { 00:14:27.013 "name": "BaseBdev2", 00:14:27.013 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:27.013 "is_configured": true, 00:14:27.013 "data_offset": 2048, 00:14:27.013 "data_size": 63488 00:14:27.013 }, 00:14:27.013 { 00:14:27.013 "name": "BaseBdev3", 
00:14:27.013 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:27.013 "is_configured": true, 00:14:27.013 "data_offset": 2048, 00:14:27.013 "data_size": 63488 00:14:27.013 } 00:14:27.013 ] 00:14:27.013 }' 00:14:27.013 03:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.273 03:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.273 03:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.273 03:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.274 03:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:28.215 03:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.215 03:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.215 03:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.215 03:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.215 03:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.215 03:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.215 03:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.215 03:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.215 03:14:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.215 03:14:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.215 03:14:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:28.215 03:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.215 "name": "raid_bdev1", 00:14:28.215 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:28.215 "strip_size_kb": 64, 00:14:28.215 "state": "online", 00:14:28.215 "raid_level": "raid5f", 00:14:28.215 "superblock": true, 00:14:28.215 "num_base_bdevs": 3, 00:14:28.215 "num_base_bdevs_discovered": 3, 00:14:28.215 "num_base_bdevs_operational": 3, 00:14:28.215 "process": { 00:14:28.215 "type": "rebuild", 00:14:28.215 "target": "spare", 00:14:28.215 "progress": { 00:14:28.215 "blocks": 92160, 00:14:28.215 "percent": 72 00:14:28.215 } 00:14:28.215 }, 00:14:28.215 "base_bdevs_list": [ 00:14:28.215 { 00:14:28.215 "name": "spare", 00:14:28.215 "uuid": "b771c45c-f8a0-542f-8b60-7c817294c9a6", 00:14:28.215 "is_configured": true, 00:14:28.215 "data_offset": 2048, 00:14:28.215 "data_size": 63488 00:14:28.215 }, 00:14:28.215 { 00:14:28.215 "name": "BaseBdev2", 00:14:28.215 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:28.215 "is_configured": true, 00:14:28.215 "data_offset": 2048, 00:14:28.215 "data_size": 63488 00:14:28.215 }, 00:14:28.215 { 00:14:28.215 "name": "BaseBdev3", 00:14:28.215 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:28.215 "is_configured": true, 00:14:28.215 "data_offset": 2048, 00:14:28.215 "data_size": 63488 00:14:28.215 } 00:14:28.215 ] 00:14:28.215 }' 00:14:28.215 03:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.215 03:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.215 03:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.215 03:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.215 03:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.598 03:14:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.598 03:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.598 03:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.598 03:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.598 03:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.598 03:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.598 03:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.598 03:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.599 03:14:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.599 03:14:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.599 03:14:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.599 03:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.599 "name": "raid_bdev1", 00:14:29.599 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:29.599 "strip_size_kb": 64, 00:14:29.599 "state": "online", 00:14:29.599 "raid_level": "raid5f", 00:14:29.599 "superblock": true, 00:14:29.599 "num_base_bdevs": 3, 00:14:29.599 "num_base_bdevs_discovered": 3, 00:14:29.599 "num_base_bdevs_operational": 3, 00:14:29.599 "process": { 00:14:29.599 "type": "rebuild", 00:14:29.599 "target": "spare", 00:14:29.599 "progress": { 00:14:29.599 "blocks": 114688, 00:14:29.599 "percent": 90 00:14:29.599 } 00:14:29.599 }, 00:14:29.599 "base_bdevs_list": [ 00:14:29.599 { 00:14:29.599 "name": "spare", 00:14:29.599 "uuid": 
"b771c45c-f8a0-542f-8b60-7c817294c9a6", 00:14:29.599 "is_configured": true, 00:14:29.599 "data_offset": 2048, 00:14:29.599 "data_size": 63488 00:14:29.599 }, 00:14:29.599 { 00:14:29.599 "name": "BaseBdev2", 00:14:29.599 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:29.599 "is_configured": true, 00:14:29.599 "data_offset": 2048, 00:14:29.599 "data_size": 63488 00:14:29.599 }, 00:14:29.599 { 00:14:29.599 "name": "BaseBdev3", 00:14:29.599 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:29.599 "is_configured": true, 00:14:29.599 "data_offset": 2048, 00:14:29.599 "data_size": 63488 00:14:29.599 } 00:14:29.599 ] 00:14:29.599 }' 00:14:29.599 03:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.599 03:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.599 03:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.599 03:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.599 03:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.859 [2024-11-18 03:14:33.319409] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:29.859 [2024-11-18 03:14:33.319529] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:29.859 [2024-11-18 03:14:33.319664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.429 03:14:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:30.429 03:14:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.429 03:14:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.429 03:14:33 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.429 03:14:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.429 03:14:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.429 03:14:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.429 03:14:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.429 03:14:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.429 03:14:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.429 03:14:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.429 03:14:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.429 "name": "raid_bdev1", 00:14:30.429 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:30.429 "strip_size_kb": 64, 00:14:30.429 "state": "online", 00:14:30.429 "raid_level": "raid5f", 00:14:30.429 "superblock": true, 00:14:30.429 "num_base_bdevs": 3, 00:14:30.429 "num_base_bdevs_discovered": 3, 00:14:30.429 "num_base_bdevs_operational": 3, 00:14:30.429 "base_bdevs_list": [ 00:14:30.429 { 00:14:30.429 "name": "spare", 00:14:30.429 "uuid": "b771c45c-f8a0-542f-8b60-7c817294c9a6", 00:14:30.429 "is_configured": true, 00:14:30.429 "data_offset": 2048, 00:14:30.429 "data_size": 63488 00:14:30.429 }, 00:14:30.429 { 00:14:30.429 "name": "BaseBdev2", 00:14:30.429 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:30.429 "is_configured": true, 00:14:30.429 "data_offset": 2048, 00:14:30.429 "data_size": 63488 00:14:30.429 }, 00:14:30.429 { 00:14:30.429 "name": "BaseBdev3", 00:14:30.429 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:30.429 "is_configured": true, 00:14:30.429 "data_offset": 2048, 00:14:30.429 "data_size": 63488 00:14:30.429 } 
00:14:30.429 ] 00:14:30.429 }' 00:14:30.429 03:14:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.690 "name": "raid_bdev1", 00:14:30.690 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:30.690 "strip_size_kb": 64, 00:14:30.690 "state": "online", 00:14:30.690 "raid_level": 
"raid5f", 00:14:30.690 "superblock": true, 00:14:30.690 "num_base_bdevs": 3, 00:14:30.690 "num_base_bdevs_discovered": 3, 00:14:30.690 "num_base_bdevs_operational": 3, 00:14:30.690 "base_bdevs_list": [ 00:14:30.690 { 00:14:30.690 "name": "spare", 00:14:30.690 "uuid": "b771c45c-f8a0-542f-8b60-7c817294c9a6", 00:14:30.690 "is_configured": true, 00:14:30.690 "data_offset": 2048, 00:14:30.690 "data_size": 63488 00:14:30.690 }, 00:14:30.690 { 00:14:30.690 "name": "BaseBdev2", 00:14:30.690 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:30.690 "is_configured": true, 00:14:30.690 "data_offset": 2048, 00:14:30.690 "data_size": 63488 00:14:30.690 }, 00:14:30.690 { 00:14:30.690 "name": "BaseBdev3", 00:14:30.690 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:30.690 "is_configured": true, 00:14:30.690 "data_offset": 2048, 00:14:30.690 "data_size": 63488 00:14:30.690 } 00:14:30.690 ] 00:14:30.690 }' 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.690 03:14:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.690 "name": "raid_bdev1", 00:14:30.690 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:30.690 "strip_size_kb": 64, 00:14:30.690 "state": "online", 00:14:30.690 "raid_level": "raid5f", 00:14:30.690 "superblock": true, 00:14:30.690 "num_base_bdevs": 3, 00:14:30.690 "num_base_bdevs_discovered": 3, 00:14:30.690 "num_base_bdevs_operational": 3, 00:14:30.690 "base_bdevs_list": [ 00:14:30.690 { 00:14:30.690 "name": "spare", 00:14:30.690 "uuid": "b771c45c-f8a0-542f-8b60-7c817294c9a6", 00:14:30.690 "is_configured": true, 00:14:30.690 "data_offset": 2048, 00:14:30.690 "data_size": 63488 00:14:30.690 }, 00:14:30.690 { 00:14:30.690 "name": "BaseBdev2", 00:14:30.690 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:30.690 "is_configured": true, 00:14:30.690 "data_offset": 2048, 00:14:30.690 
"data_size": 63488 00:14:30.690 }, 00:14:30.690 { 00:14:30.690 "name": "BaseBdev3", 00:14:30.690 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:30.690 "is_configured": true, 00:14:30.690 "data_offset": 2048, 00:14:30.690 "data_size": 63488 00:14:30.690 } 00:14:30.690 ] 00:14:30.690 }' 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.690 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.261 [2024-11-18 03:14:34.622509] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.261 [2024-11-18 03:14:34.622592] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.261 [2024-11-18 03:14:34.622697] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.261 [2024-11-18 03:14:34.622817] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.261 [2024-11-18 03:14:34.622868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:31.261 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:31.521 /dev/nbd0 00:14:31.521 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:31.522 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:31.522 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:14:31.522 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:31.522 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:31.522 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:31.522 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:31.522 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:31.522 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:31.522 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:31.522 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.522 1+0 records in 00:14:31.522 1+0 records out 00:14:31.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459365 s, 8.9 MB/s 00:14:31.522 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.522 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:31.522 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.522 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:31.522 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:31.522 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:31.522 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:31.522 03:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:31.782 /dev/nbd1 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.782 1+0 records in 00:14:31.782 1+0 records out 00:14:31.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255504 s, 16.0 MB/s 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- 
# '[' 4096 '!=' 0 ']' 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.782 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:32.043 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:32.043 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:32.043 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:32.043 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.043 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.043 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:32.043 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:14:32.043 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.043 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.043 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.304 [2024-11-18 03:14:35.689731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:32.304 [2024-11-18 03:14:35.689793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.304 [2024-11-18 03:14:35.689816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:32.304 [2024-11-18 03:14:35.689824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.304 [2024-11-18 03:14:35.692152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.304 [2024-11-18 03:14:35.692188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:32.304 [2024-11-18 03:14:35.692273] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:32.304 [2024-11-18 03:14:35.692311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.304 [2024-11-18 03:14:35.692420] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:32.304 [2024-11-18 03:14:35.692507] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:32.304 spare 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.304 [2024-11-18 03:14:35.792396] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:14:32.304 [2024-11-18 03:14:35.792425] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:32.304 [2024-11-18 03:14:35.792694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047560 00:14:32.304 [2024-11-18 03:14:35.793137] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:14:32.304 [2024-11-18 03:14:35.793153] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:14:32.304 [2024-11-18 03:14:35.793307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.304 03:14:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.304 "name": "raid_bdev1", 00:14:32.304 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:32.304 "strip_size_kb": 64, 00:14:32.304 "state": "online", 00:14:32.304 "raid_level": "raid5f", 00:14:32.304 "superblock": true, 00:14:32.304 "num_base_bdevs": 3, 00:14:32.304 "num_base_bdevs_discovered": 3, 00:14:32.304 "num_base_bdevs_operational": 3, 00:14:32.304 "base_bdevs_list": [ 00:14:32.304 { 00:14:32.304 "name": "spare", 00:14:32.304 "uuid": "b771c45c-f8a0-542f-8b60-7c817294c9a6", 00:14:32.304 "is_configured": true, 00:14:32.304 "data_offset": 2048, 00:14:32.304 "data_size": 63488 00:14:32.304 }, 00:14:32.304 { 00:14:32.304 "name": "BaseBdev2", 00:14:32.304 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:32.304 "is_configured": true, 00:14:32.304 "data_offset": 2048, 00:14:32.304 "data_size": 63488 00:14:32.304 }, 00:14:32.304 { 00:14:32.304 "name": "BaseBdev3", 00:14:32.304 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:32.304 "is_configured": true, 00:14:32.304 "data_offset": 2048, 00:14:32.304 "data_size": 63488 00:14:32.304 } 00:14:32.304 ] 00:14:32.304 }' 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.304 03:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:32.876 03:14:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.876 "name": "raid_bdev1", 00:14:32.876 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:32.876 "strip_size_kb": 64, 00:14:32.876 "state": "online", 00:14:32.876 "raid_level": "raid5f", 00:14:32.876 "superblock": true, 00:14:32.876 "num_base_bdevs": 3, 00:14:32.876 "num_base_bdevs_discovered": 3, 00:14:32.876 "num_base_bdevs_operational": 3, 00:14:32.876 "base_bdevs_list": [ 00:14:32.876 { 00:14:32.876 "name": "spare", 00:14:32.876 "uuid": "b771c45c-f8a0-542f-8b60-7c817294c9a6", 00:14:32.876 "is_configured": true, 00:14:32.876 "data_offset": 2048, 00:14:32.876 "data_size": 63488 00:14:32.876 }, 00:14:32.876 { 00:14:32.876 "name": "BaseBdev2", 00:14:32.876 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:32.876 "is_configured": true, 00:14:32.876 "data_offset": 2048, 00:14:32.876 "data_size": 63488 00:14:32.876 }, 00:14:32.876 { 00:14:32.876 "name": "BaseBdev3", 00:14:32.876 "uuid": 
"36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:32.876 "is_configured": true, 00:14:32.876 "data_offset": 2048, 00:14:32.876 "data_size": 63488 00:14:32.876 } 00:14:32.876 ] 00:14:32.876 }' 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.876 [2024-11-18 03:14:36.341617] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:32.876 
03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.876 "name": "raid_bdev1", 00:14:32.876 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:32.876 "strip_size_kb": 64, 00:14:32.876 "state": "online", 00:14:32.876 "raid_level": "raid5f", 00:14:32.876 "superblock": true, 00:14:32.876 "num_base_bdevs": 3, 00:14:32.876 "num_base_bdevs_discovered": 2, 00:14:32.876 "num_base_bdevs_operational": 2, 
00:14:32.876 "base_bdevs_list": [ 00:14:32.876 { 00:14:32.876 "name": null, 00:14:32.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.876 "is_configured": false, 00:14:32.876 "data_offset": 0, 00:14:32.876 "data_size": 63488 00:14:32.876 }, 00:14:32.876 { 00:14:32.876 "name": "BaseBdev2", 00:14:32.876 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:32.876 "is_configured": true, 00:14:32.876 "data_offset": 2048, 00:14:32.876 "data_size": 63488 00:14:32.876 }, 00:14:32.876 { 00:14:32.876 "name": "BaseBdev3", 00:14:32.876 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:32.876 "is_configured": true, 00:14:32.876 "data_offset": 2048, 00:14:32.876 "data_size": 63488 00:14:32.876 } 00:14:32.876 ] 00:14:32.876 }' 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.876 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.448 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:33.448 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.448 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.448 [2024-11-18 03:14:36.788941] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:33.448 [2024-11-18 03:14:36.789141] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:33.448 [2024-11-18 03:14:36.789155] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:33.448 [2024-11-18 03:14:36.789209] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:33.448 [2024-11-18 03:14:36.792868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047630 00:14:33.448 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.448 03:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:33.448 [2024-11-18 03:14:36.795029] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:34.388 03:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.388 03:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.388 03:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.388 03:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.388 03:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.388 03:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.388 03:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.388 03:14:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.388 03:14:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.388 03:14:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.388 03:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.388 "name": "raid_bdev1", 00:14:34.388 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:34.388 "strip_size_kb": 64, 00:14:34.388 "state": "online", 00:14:34.388 
"raid_level": "raid5f", 00:14:34.388 "superblock": true, 00:14:34.388 "num_base_bdevs": 3, 00:14:34.388 "num_base_bdevs_discovered": 3, 00:14:34.388 "num_base_bdevs_operational": 3, 00:14:34.388 "process": { 00:14:34.388 "type": "rebuild", 00:14:34.388 "target": "spare", 00:14:34.388 "progress": { 00:14:34.388 "blocks": 20480, 00:14:34.388 "percent": 16 00:14:34.388 } 00:14:34.388 }, 00:14:34.388 "base_bdevs_list": [ 00:14:34.388 { 00:14:34.388 "name": "spare", 00:14:34.388 "uuid": "b771c45c-f8a0-542f-8b60-7c817294c9a6", 00:14:34.388 "is_configured": true, 00:14:34.388 "data_offset": 2048, 00:14:34.388 "data_size": 63488 00:14:34.388 }, 00:14:34.388 { 00:14:34.388 "name": "BaseBdev2", 00:14:34.388 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:34.388 "is_configured": true, 00:14:34.388 "data_offset": 2048, 00:14:34.388 "data_size": 63488 00:14:34.388 }, 00:14:34.388 { 00:14:34.388 "name": "BaseBdev3", 00:14:34.388 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:34.388 "is_configured": true, 00:14:34.389 "data_offset": 2048, 00:14:34.389 "data_size": 63488 00:14:34.389 } 00:14:34.389 ] 00:14:34.389 }' 00:14:34.389 03:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.389 03:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.389 03:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.389 03:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.389 03:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:34.389 03:14:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.389 03:14:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.389 [2024-11-18 03:14:37.935657] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:34.649 [2024-11-18 03:14:38.003211] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:34.649 [2024-11-18 03:14:38.003272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.649 [2024-11-18 03:14:38.003291] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:34.649 [2024-11-18 03:14:38.003298] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:34.649 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.649 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:34.649 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.649 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.649 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.649 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.649 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.649 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.649 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.649 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.649 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.649 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.649 03:14:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.649 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.649 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.649 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.649 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.649 "name": "raid_bdev1", 00:14:34.649 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:34.649 "strip_size_kb": 64, 00:14:34.649 "state": "online", 00:14:34.649 "raid_level": "raid5f", 00:14:34.649 "superblock": true, 00:14:34.649 "num_base_bdevs": 3, 00:14:34.649 "num_base_bdevs_discovered": 2, 00:14:34.649 "num_base_bdevs_operational": 2, 00:14:34.649 "base_bdevs_list": [ 00:14:34.649 { 00:14:34.649 "name": null, 00:14:34.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.649 "is_configured": false, 00:14:34.649 "data_offset": 0, 00:14:34.649 "data_size": 63488 00:14:34.649 }, 00:14:34.649 { 00:14:34.649 "name": "BaseBdev2", 00:14:34.649 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:34.649 "is_configured": true, 00:14:34.649 "data_offset": 2048, 00:14:34.649 "data_size": 63488 00:14:34.649 }, 00:14:34.649 { 00:14:34.649 "name": "BaseBdev3", 00:14:34.649 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:34.649 "is_configured": true, 00:14:34.649 "data_offset": 2048, 00:14:34.649 "data_size": 63488 00:14:34.649 } 00:14:34.649 ] 00:14:34.649 }' 00:14:34.649 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.649 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.910 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:34.910 03:14:38 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.910 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.910 [2024-11-18 03:14:38.459654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:34.910 [2024-11-18 03:14:38.459768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.910 [2024-11-18 03:14:38.459809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:14:34.910 [2024-11-18 03:14:38.459838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.910 [2024-11-18 03:14:38.460301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.910 [2024-11-18 03:14:38.460322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:34.910 [2024-11-18 03:14:38.460408] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:34.910 [2024-11-18 03:14:38.460421] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:34.910 [2024-11-18 03:14:38.460433] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:34.910 [2024-11-18 03:14:38.460466] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.910 [2024-11-18 03:14:38.464021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:14:34.910 spare 00:14:34.910 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.910 03:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:34.910 [2024-11-18 03:14:38.466180] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:36.292 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.292 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.292 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.292 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.292 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.292 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.292 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.292 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.292 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.292 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.292 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.292 "name": "raid_bdev1", 00:14:36.292 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:36.292 "strip_size_kb": 64, 00:14:36.292 "state": 
"online", 00:14:36.292 "raid_level": "raid5f", 00:14:36.293 "superblock": true, 00:14:36.293 "num_base_bdevs": 3, 00:14:36.293 "num_base_bdevs_discovered": 3, 00:14:36.293 "num_base_bdevs_operational": 3, 00:14:36.293 "process": { 00:14:36.293 "type": "rebuild", 00:14:36.293 "target": "spare", 00:14:36.293 "progress": { 00:14:36.293 "blocks": 20480, 00:14:36.293 "percent": 16 00:14:36.293 } 00:14:36.293 }, 00:14:36.293 "base_bdevs_list": [ 00:14:36.293 { 00:14:36.293 "name": "spare", 00:14:36.293 "uuid": "b771c45c-f8a0-542f-8b60-7c817294c9a6", 00:14:36.293 "is_configured": true, 00:14:36.293 "data_offset": 2048, 00:14:36.293 "data_size": 63488 00:14:36.293 }, 00:14:36.293 { 00:14:36.293 "name": "BaseBdev2", 00:14:36.293 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:36.293 "is_configured": true, 00:14:36.293 "data_offset": 2048, 00:14:36.293 "data_size": 63488 00:14:36.293 }, 00:14:36.293 { 00:14:36.293 "name": "BaseBdev3", 00:14:36.293 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:36.293 "is_configured": true, 00:14:36.293 "data_offset": 2048, 00:14:36.293 "data_size": 63488 00:14:36.293 } 00:14:36.293 ] 00:14:36.293 }' 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.293 [2024-11-18 03:14:39.631115] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:36.293 [2024-11-18 03:14:39.673975] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:36.293 [2024-11-18 03:14:39.674123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.293 [2024-11-18 03:14:39.674168] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:36.293 [2024-11-18 03:14:39.674199] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.293 "name": "raid_bdev1", 00:14:36.293 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:36.293 "strip_size_kb": 64, 00:14:36.293 "state": "online", 00:14:36.293 "raid_level": "raid5f", 00:14:36.293 "superblock": true, 00:14:36.293 "num_base_bdevs": 3, 00:14:36.293 "num_base_bdevs_discovered": 2, 00:14:36.293 "num_base_bdevs_operational": 2, 00:14:36.293 "base_bdevs_list": [ 00:14:36.293 { 00:14:36.293 "name": null, 00:14:36.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.293 "is_configured": false, 00:14:36.293 "data_offset": 0, 00:14:36.293 "data_size": 63488 00:14:36.293 }, 00:14:36.293 { 00:14:36.293 "name": "BaseBdev2", 00:14:36.293 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:36.293 "is_configured": true, 00:14:36.293 "data_offset": 2048, 00:14:36.293 "data_size": 63488 00:14:36.293 }, 00:14:36.293 { 00:14:36.293 "name": "BaseBdev3", 00:14:36.293 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:36.293 "is_configured": true, 00:14:36.293 "data_offset": 2048, 00:14:36.293 "data_size": 63488 00:14:36.293 } 00:14:36.293 ] 00:14:36.293 }' 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.293 03:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.554 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:36.554 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:14:36.554 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:36.554 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:36.554 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.554 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.554 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.554 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.554 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.814 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.814 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.814 "name": "raid_bdev1", 00:14:36.814 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:36.814 "strip_size_kb": 64, 00:14:36.814 "state": "online", 00:14:36.814 "raid_level": "raid5f", 00:14:36.814 "superblock": true, 00:14:36.814 "num_base_bdevs": 3, 00:14:36.814 "num_base_bdevs_discovered": 2, 00:14:36.814 "num_base_bdevs_operational": 2, 00:14:36.814 "base_bdevs_list": [ 00:14:36.814 { 00:14:36.814 "name": null, 00:14:36.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.814 "is_configured": false, 00:14:36.814 "data_offset": 0, 00:14:36.814 "data_size": 63488 00:14:36.814 }, 00:14:36.814 { 00:14:36.814 "name": "BaseBdev2", 00:14:36.814 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:36.814 "is_configured": true, 00:14:36.814 "data_offset": 2048, 00:14:36.814 "data_size": 63488 00:14:36.814 }, 00:14:36.814 { 00:14:36.814 "name": "BaseBdev3", 00:14:36.814 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:36.814 "is_configured": true, 
00:14:36.814 "data_offset": 2048, 00:14:36.814 "data_size": 63488 00:14:36.814 } 00:14:36.814 ] 00:14:36.814 }' 00:14:36.814 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.814 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:36.814 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.814 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:36.814 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:36.814 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.814 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.814 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.814 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:36.814 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.814 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.814 [2024-11-18 03:14:40.270490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:36.814 [2024-11-18 03:14:40.270611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.814 [2024-11-18 03:14:40.270639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:36.814 [2024-11-18 03:14:40.270650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.814 [2024-11-18 03:14:40.271065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.814 [2024-11-18 
03:14:40.271087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:36.814 [2024-11-18 03:14:40.271158] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:36.814 [2024-11-18 03:14:40.271175] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:36.814 [2024-11-18 03:14:40.271184] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:36.814 [2024-11-18 03:14:40.271196] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:36.814 BaseBdev1 00:14:36.814 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.814 03:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:37.754 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:37.754 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.754 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.754 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.754 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.754 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:37.754 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.754 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.754 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.754 03:14:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.754 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.754 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.754 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.754 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.754 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.013 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.013 "name": "raid_bdev1", 00:14:38.013 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:38.013 "strip_size_kb": 64, 00:14:38.013 "state": "online", 00:14:38.013 "raid_level": "raid5f", 00:14:38.013 "superblock": true, 00:14:38.013 "num_base_bdevs": 3, 00:14:38.013 "num_base_bdevs_discovered": 2, 00:14:38.013 "num_base_bdevs_operational": 2, 00:14:38.013 "base_bdevs_list": [ 00:14:38.013 { 00:14:38.013 "name": null, 00:14:38.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.013 "is_configured": false, 00:14:38.013 "data_offset": 0, 00:14:38.013 "data_size": 63488 00:14:38.013 }, 00:14:38.013 { 00:14:38.013 "name": "BaseBdev2", 00:14:38.013 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:38.013 "is_configured": true, 00:14:38.013 "data_offset": 2048, 00:14:38.013 "data_size": 63488 00:14:38.013 }, 00:14:38.013 { 00:14:38.013 "name": "BaseBdev3", 00:14:38.013 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:38.013 "is_configured": true, 00:14:38.013 "data_offset": 2048, 00:14:38.013 "data_size": 63488 00:14:38.013 } 00:14:38.013 ] 00:14:38.013 }' 00:14:38.013 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.013 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:38.273 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.273 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.273 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.273 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.273 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.273 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.273 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.273 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.273 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.273 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.273 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.273 "name": "raid_bdev1", 00:14:38.273 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:38.273 "strip_size_kb": 64, 00:14:38.273 "state": "online", 00:14:38.273 "raid_level": "raid5f", 00:14:38.273 "superblock": true, 00:14:38.273 "num_base_bdevs": 3, 00:14:38.273 "num_base_bdevs_discovered": 2, 00:14:38.273 "num_base_bdevs_operational": 2, 00:14:38.273 "base_bdevs_list": [ 00:14:38.273 { 00:14:38.273 "name": null, 00:14:38.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.273 "is_configured": false, 00:14:38.273 "data_offset": 0, 00:14:38.273 "data_size": 63488 00:14:38.273 }, 00:14:38.273 { 00:14:38.273 "name": "BaseBdev2", 00:14:38.273 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 
00:14:38.273 "is_configured": true, 00:14:38.273 "data_offset": 2048, 00:14:38.273 "data_size": 63488 00:14:38.273 }, 00:14:38.273 { 00:14:38.273 "name": "BaseBdev3", 00:14:38.273 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:38.273 "is_configured": true, 00:14:38.273 "data_offset": 2048, 00:14:38.273 "data_size": 63488 00:14:38.273 } 00:14:38.273 ] 00:14:38.273 }' 00:14:38.273 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.273 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.273 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.534 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.534 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:38.534 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:38.534 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:38.534 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:38.534 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.534 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:38.534 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.534 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:38.534 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.534 03:14:41 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.534 [2024-11-18 03:14:41.871799] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:38.534 [2024-11-18 03:14:41.872021] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:38.534 [2024-11-18 03:14:41.872085] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:38.534 request: 00:14:38.534 { 00:14:38.534 "base_bdev": "BaseBdev1", 00:14:38.534 "raid_bdev": "raid_bdev1", 00:14:38.534 "method": "bdev_raid_add_base_bdev", 00:14:38.534 "req_id": 1 00:14:38.534 } 00:14:38.534 Got JSON-RPC error response 00:14:38.534 response: 00:14:38.534 { 00:14:38.534 "code": -22, 00:14:38.534 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:38.534 } 00:14:38.534 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:38.534 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:38.534 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:38.534 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:38.534 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:38.534 03:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:39.474 03:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:39.474 03:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.474 03:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.474 03:14:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.474 03:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.474 03:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.474 03:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.474 03:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.474 03:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.474 03:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.474 03:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.474 03:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.474 03:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.474 03:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.474 03:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.474 03:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.474 "name": "raid_bdev1", 00:14:39.474 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:39.474 "strip_size_kb": 64, 00:14:39.474 "state": "online", 00:14:39.474 "raid_level": "raid5f", 00:14:39.474 "superblock": true, 00:14:39.474 "num_base_bdevs": 3, 00:14:39.474 "num_base_bdevs_discovered": 2, 00:14:39.474 "num_base_bdevs_operational": 2, 00:14:39.474 "base_bdevs_list": [ 00:14:39.474 { 00:14:39.474 "name": null, 00:14:39.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.474 "is_configured": false, 00:14:39.474 "data_offset": 0, 00:14:39.474 "data_size": 63488 00:14:39.474 }, 00:14:39.474 { 00:14:39.474 
"name": "BaseBdev2", 00:14:39.474 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:39.474 "is_configured": true, 00:14:39.474 "data_offset": 2048, 00:14:39.474 "data_size": 63488 00:14:39.474 }, 00:14:39.474 { 00:14:39.474 "name": "BaseBdev3", 00:14:39.474 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:39.474 "is_configured": true, 00:14:39.474 "data_offset": 2048, 00:14:39.474 "data_size": 63488 00:14:39.474 } 00:14:39.474 ] 00:14:39.474 }' 00:14:39.474 03:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.474 03:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.808 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:39.808 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.808 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:39.808 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:39.808 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.808 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.808 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.808 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.808 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.808 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.808 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.808 "name": "raid_bdev1", 00:14:39.808 "uuid": "c491b7ad-b0d6-4ff9-9645-4fa933350e39", 00:14:39.808 
"strip_size_kb": 64, 00:14:39.808 "state": "online", 00:14:39.808 "raid_level": "raid5f", 00:14:39.808 "superblock": true, 00:14:39.808 "num_base_bdevs": 3, 00:14:39.808 "num_base_bdevs_discovered": 2, 00:14:39.808 "num_base_bdevs_operational": 2, 00:14:39.808 "base_bdevs_list": [ 00:14:39.808 { 00:14:39.808 "name": null, 00:14:39.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.808 "is_configured": false, 00:14:39.808 "data_offset": 0, 00:14:39.808 "data_size": 63488 00:14:39.808 }, 00:14:39.808 { 00:14:39.808 "name": "BaseBdev2", 00:14:39.808 "uuid": "76201b72-1214-599c-93b6-3591f60c1182", 00:14:39.808 "is_configured": true, 00:14:39.808 "data_offset": 2048, 00:14:39.808 "data_size": 63488 00:14:39.808 }, 00:14:39.808 { 00:14:39.808 "name": "BaseBdev3", 00:14:39.808 "uuid": "36f5484f-53ac-5db0-8add-faa9876fb54f", 00:14:39.808 "is_configured": true, 00:14:39.808 "data_offset": 2048, 00:14:39.808 "data_size": 63488 00:14:39.808 } 00:14:39.808 ] 00:14:39.808 }' 00:14:39.808 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.808 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:39.808 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.068 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.068 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92643 00:14:40.068 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92643 ']' 00:14:40.068 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 92643 00:14:40.068 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:40.068 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:40.068 03:14:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92643 00:14:40.068 killing process with pid 92643 00:14:40.068 Received shutdown signal, test time was about 60.000000 seconds 00:14:40.068 00:14:40.068 Latency(us) 00:14:40.068 [2024-11-18T03:14:43.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.068 [2024-11-18T03:14:43.645Z] =================================================================================================================== 00:14:40.068 [2024-11-18T03:14:43.645Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:40.068 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:40.068 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:40.068 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92643' 00:14:40.068 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 92643 00:14:40.068 [2024-11-18 03:14:43.469655] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:40.068 [2024-11-18 03:14:43.469775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.068 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 92643 00:14:40.068 [2024-11-18 03:14:43.469843] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.068 [2024-11-18 03:14:43.469853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:14:40.068 [2024-11-18 03:14:43.511577] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:40.328 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:40.328 00:14:40.328 real 0m21.229s 00:14:40.328 user 0m27.533s 
00:14:40.328 sys 0m2.612s 00:14:40.328 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:40.328 ************************************ 00:14:40.328 END TEST raid5f_rebuild_test_sb 00:14:40.328 ************************************ 00:14:40.328 03:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.328 03:14:43 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:40.328 03:14:43 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:14:40.328 03:14:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:40.328 03:14:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:40.328 03:14:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:40.328 ************************************ 00:14:40.328 START TEST raid5f_state_function_test 00:14:40.328 ************************************ 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:40.328 Process raid pid: 93374 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93374 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93374' 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93374 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 93374 ']' 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:40.328 03:14:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.587 [2024-11-18 03:14:43.940518] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:40.587 [2024-11-18 03:14:43.940823] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.587 [2024-11-18 03:14:44.093277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.587 [2024-11-18 03:14:44.146528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.847 [2024-11-18 03:14:44.188774] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.847 [2024-11-18 03:14:44.188812] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.416 [2024-11-18 03:14:44.830206] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:41.416 [2024-11-18 03:14:44.830310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:41.416 [2024-11-18 03:14:44.830352] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:41.416 [2024-11-18 03:14:44.830376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:41.416 [2024-11-18 03:14:44.830394] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:41.416 [2024-11-18 03:14:44.830419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:41.416 [2024-11-18 03:14:44.830437] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:41.416 [2024-11-18 03:14:44.830457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.416 03:14:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.416 03:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.416 "name": "Existed_Raid", 00:14:41.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.416 "strip_size_kb": 64, 00:14:41.417 "state": "configuring", 00:14:41.417 "raid_level": "raid5f", 00:14:41.417 "superblock": false, 00:14:41.417 "num_base_bdevs": 4, 00:14:41.417 "num_base_bdevs_discovered": 0, 00:14:41.417 "num_base_bdevs_operational": 4, 00:14:41.417 "base_bdevs_list": [ 00:14:41.417 { 00:14:41.417 "name": "BaseBdev1", 00:14:41.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.417 "is_configured": false, 00:14:41.417 "data_offset": 0, 00:14:41.417 "data_size": 0 00:14:41.417 }, 00:14:41.417 { 00:14:41.417 "name": "BaseBdev2", 00:14:41.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.417 "is_configured": false, 00:14:41.417 "data_offset": 0, 00:14:41.417 "data_size": 0 00:14:41.417 }, 00:14:41.417 { 00:14:41.417 "name": "BaseBdev3", 00:14:41.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.417 "is_configured": false, 00:14:41.417 "data_offset": 0, 00:14:41.417 "data_size": 0 00:14:41.417 }, 00:14:41.417 { 00:14:41.417 "name": "BaseBdev4", 00:14:41.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.417 "is_configured": false, 00:14:41.417 "data_offset": 0, 00:14:41.417 "data_size": 0 00:14:41.417 } 00:14:41.417 ] 00:14:41.417 }' 00:14:41.417 03:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.417 03:14:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.987 [2024-11-18 03:14:45.281314] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:41.987 [2024-11-18 03:14:45.281398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.987 [2024-11-18 03:14:45.293333] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:41.987 [2024-11-18 03:14:45.293410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:41.987 [2024-11-18 03:14:45.293437] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:41.987 [2024-11-18 03:14:45.293459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:41.987 [2024-11-18 03:14:45.293477] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:41.987 [2024-11-18 03:14:45.293497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:41.987 [2024-11-18 03:14:45.293514] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:14:41.987 [2024-11-18 03:14:45.293533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.987 [2024-11-18 03:14:45.314228] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:41.987 BaseBdev1 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.987 
03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.987 [ 00:14:41.987 { 00:14:41.987 "name": "BaseBdev1", 00:14:41.987 "aliases": [ 00:14:41.987 "634cb485-288c-450c-856d-b5fba7abeef3" 00:14:41.987 ], 00:14:41.987 "product_name": "Malloc disk", 00:14:41.987 "block_size": 512, 00:14:41.987 "num_blocks": 65536, 00:14:41.987 "uuid": "634cb485-288c-450c-856d-b5fba7abeef3", 00:14:41.987 "assigned_rate_limits": { 00:14:41.987 "rw_ios_per_sec": 0, 00:14:41.987 "rw_mbytes_per_sec": 0, 00:14:41.987 "r_mbytes_per_sec": 0, 00:14:41.987 "w_mbytes_per_sec": 0 00:14:41.987 }, 00:14:41.987 "claimed": true, 00:14:41.987 "claim_type": "exclusive_write", 00:14:41.987 "zoned": false, 00:14:41.987 "supported_io_types": { 00:14:41.987 "read": true, 00:14:41.987 "write": true, 00:14:41.987 "unmap": true, 00:14:41.987 "flush": true, 00:14:41.987 "reset": true, 00:14:41.987 "nvme_admin": false, 00:14:41.987 "nvme_io": false, 00:14:41.987 "nvme_io_md": false, 00:14:41.987 "write_zeroes": true, 00:14:41.987 "zcopy": true, 00:14:41.987 "get_zone_info": false, 00:14:41.987 "zone_management": false, 00:14:41.987 "zone_append": false, 00:14:41.987 "compare": false, 00:14:41.987 "compare_and_write": false, 00:14:41.987 "abort": true, 00:14:41.987 "seek_hole": false, 00:14:41.987 "seek_data": false, 00:14:41.987 "copy": true, 00:14:41.987 "nvme_iov_md": false 00:14:41.987 }, 00:14:41.987 "memory_domains": [ 00:14:41.987 { 00:14:41.987 "dma_device_id": "system", 00:14:41.987 "dma_device_type": 1 00:14:41.987 }, 00:14:41.987 { 00:14:41.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.987 "dma_device_type": 2 00:14:41.987 } 00:14:41.987 ], 00:14:41.987 "driver_specific": {} 00:14:41.987 } 
00:14:41.987 ] 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.987 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.988 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.988 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.988 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.988 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.988 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:41.988 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.988 "name": "Existed_Raid", 00:14:41.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.988 "strip_size_kb": 64, 00:14:41.988 "state": "configuring", 00:14:41.988 "raid_level": "raid5f", 00:14:41.988 "superblock": false, 00:14:41.988 "num_base_bdevs": 4, 00:14:41.988 "num_base_bdevs_discovered": 1, 00:14:41.988 "num_base_bdevs_operational": 4, 00:14:41.988 "base_bdevs_list": [ 00:14:41.988 { 00:14:41.988 "name": "BaseBdev1", 00:14:41.988 "uuid": "634cb485-288c-450c-856d-b5fba7abeef3", 00:14:41.988 "is_configured": true, 00:14:41.988 "data_offset": 0, 00:14:41.988 "data_size": 65536 00:14:41.988 }, 00:14:41.988 { 00:14:41.988 "name": "BaseBdev2", 00:14:41.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.988 "is_configured": false, 00:14:41.988 "data_offset": 0, 00:14:41.988 "data_size": 0 00:14:41.988 }, 00:14:41.988 { 00:14:41.988 "name": "BaseBdev3", 00:14:41.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.988 "is_configured": false, 00:14:41.988 "data_offset": 0, 00:14:41.988 "data_size": 0 00:14:41.988 }, 00:14:41.988 { 00:14:41.988 "name": "BaseBdev4", 00:14:41.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.988 "is_configured": false, 00:14:41.988 "data_offset": 0, 00:14:41.988 "data_size": 0 00:14:41.988 } 00:14:41.988 ] 00:14:41.988 }' 00:14:41.988 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.988 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.248 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:42.248 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.248 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.248 
[2024-11-18 03:14:45.817443] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.248 [2024-11-18 03:14:45.817544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:42.248 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.248 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:42.248 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.248 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.508 [2024-11-18 03:14:45.825470] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.508 [2024-11-18 03:14:45.827424] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.508 [2024-11-18 03:14:45.827505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.508 [2024-11-18 03:14:45.827542] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:42.508 [2024-11-18 03:14:45.827567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:42.508 [2024-11-18 03:14:45.827609] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:42.508 [2024-11-18 03:14:45.827632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:42.508 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.508 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:42.508 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:14:42.508 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:42.508 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.508 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.508 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.508 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.508 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.508 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.508 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.508 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.508 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.508 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.509 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.509 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.509 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.509 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.509 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.509 "name": "Existed_Raid", 00:14:42.509 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:42.509 "strip_size_kb": 64, 00:14:42.509 "state": "configuring", 00:14:42.509 "raid_level": "raid5f", 00:14:42.509 "superblock": false, 00:14:42.509 "num_base_bdevs": 4, 00:14:42.509 "num_base_bdevs_discovered": 1, 00:14:42.509 "num_base_bdevs_operational": 4, 00:14:42.509 "base_bdevs_list": [ 00:14:42.509 { 00:14:42.509 "name": "BaseBdev1", 00:14:42.509 "uuid": "634cb485-288c-450c-856d-b5fba7abeef3", 00:14:42.509 "is_configured": true, 00:14:42.509 "data_offset": 0, 00:14:42.509 "data_size": 65536 00:14:42.509 }, 00:14:42.509 { 00:14:42.509 "name": "BaseBdev2", 00:14:42.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.509 "is_configured": false, 00:14:42.509 "data_offset": 0, 00:14:42.509 "data_size": 0 00:14:42.509 }, 00:14:42.509 { 00:14:42.509 "name": "BaseBdev3", 00:14:42.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.509 "is_configured": false, 00:14:42.509 "data_offset": 0, 00:14:42.509 "data_size": 0 00:14:42.509 }, 00:14:42.509 { 00:14:42.509 "name": "BaseBdev4", 00:14:42.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.509 "is_configured": false, 00:14:42.509 "data_offset": 0, 00:14:42.509 "data_size": 0 00:14:42.509 } 00:14:42.509 ] 00:14:42.509 }' 00:14:42.509 03:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.509 03:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.769 [2024-11-18 03:14:46.299476] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:42.769 BaseBdev2 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.769 [ 00:14:42.769 { 00:14:42.769 "name": "BaseBdev2", 00:14:42.769 "aliases": [ 00:14:42.769 "fb29ad4e-dfe4-4887-9c40-ca87e98453e6" 00:14:42.769 ], 00:14:42.769 "product_name": "Malloc disk", 00:14:42.769 "block_size": 512, 00:14:42.769 "num_blocks": 65536, 00:14:42.769 "uuid": "fb29ad4e-dfe4-4887-9c40-ca87e98453e6", 00:14:42.769 "assigned_rate_limits": { 00:14:42.769 "rw_ios_per_sec": 0, 00:14:42.769 "rw_mbytes_per_sec": 0, 00:14:42.769 
"r_mbytes_per_sec": 0, 00:14:42.769 "w_mbytes_per_sec": 0 00:14:42.769 }, 00:14:42.769 "claimed": true, 00:14:42.769 "claim_type": "exclusive_write", 00:14:42.769 "zoned": false, 00:14:42.769 "supported_io_types": { 00:14:42.769 "read": true, 00:14:42.769 "write": true, 00:14:42.769 "unmap": true, 00:14:42.769 "flush": true, 00:14:42.769 "reset": true, 00:14:42.769 "nvme_admin": false, 00:14:42.769 "nvme_io": false, 00:14:42.769 "nvme_io_md": false, 00:14:42.769 "write_zeroes": true, 00:14:42.769 "zcopy": true, 00:14:42.769 "get_zone_info": false, 00:14:42.769 "zone_management": false, 00:14:42.769 "zone_append": false, 00:14:42.769 "compare": false, 00:14:42.769 "compare_and_write": false, 00:14:42.769 "abort": true, 00:14:42.769 "seek_hole": false, 00:14:42.769 "seek_data": false, 00:14:42.769 "copy": true, 00:14:42.769 "nvme_iov_md": false 00:14:42.769 }, 00:14:42.769 "memory_domains": [ 00:14:42.769 { 00:14:42.769 "dma_device_id": "system", 00:14:42.769 "dma_device_type": 1 00:14:42.769 }, 00:14:42.769 { 00:14:42.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.769 "dma_device_type": 2 00:14:42.769 } 00:14:42.769 ], 00:14:42.769 "driver_specific": {} 00:14:42.769 } 00:14:42.769 ] 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.769 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.029 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.029 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.029 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.029 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.029 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.029 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.029 "name": "Existed_Raid", 00:14:43.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.029 "strip_size_kb": 64, 00:14:43.029 "state": "configuring", 00:14:43.029 "raid_level": "raid5f", 00:14:43.029 "superblock": false, 00:14:43.030 "num_base_bdevs": 4, 00:14:43.030 "num_base_bdevs_discovered": 2, 00:14:43.030 "num_base_bdevs_operational": 4, 00:14:43.030 "base_bdevs_list": [ 00:14:43.030 { 00:14:43.030 "name": "BaseBdev1", 00:14:43.030 "uuid": 
"634cb485-288c-450c-856d-b5fba7abeef3", 00:14:43.030 "is_configured": true, 00:14:43.030 "data_offset": 0, 00:14:43.030 "data_size": 65536 00:14:43.030 }, 00:14:43.030 { 00:14:43.030 "name": "BaseBdev2", 00:14:43.030 "uuid": "fb29ad4e-dfe4-4887-9c40-ca87e98453e6", 00:14:43.030 "is_configured": true, 00:14:43.030 "data_offset": 0, 00:14:43.030 "data_size": 65536 00:14:43.030 }, 00:14:43.030 { 00:14:43.030 "name": "BaseBdev3", 00:14:43.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.030 "is_configured": false, 00:14:43.030 "data_offset": 0, 00:14:43.030 "data_size": 0 00:14:43.030 }, 00:14:43.030 { 00:14:43.030 "name": "BaseBdev4", 00:14:43.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.030 "is_configured": false, 00:14:43.030 "data_offset": 0, 00:14:43.030 "data_size": 0 00:14:43.030 } 00:14:43.030 ] 00:14:43.030 }' 00:14:43.030 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.030 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.290 [2024-11-18 03:14:46.817644] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:43.290 BaseBdev3 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.290 [ 00:14:43.290 { 00:14:43.290 "name": "BaseBdev3", 00:14:43.290 "aliases": [ 00:14:43.290 "fd0f9072-1303-4b61-bc89-a2e7851c35e5" 00:14:43.290 ], 00:14:43.290 "product_name": "Malloc disk", 00:14:43.290 "block_size": 512, 00:14:43.290 "num_blocks": 65536, 00:14:43.290 "uuid": "fd0f9072-1303-4b61-bc89-a2e7851c35e5", 00:14:43.290 "assigned_rate_limits": { 00:14:43.290 "rw_ios_per_sec": 0, 00:14:43.290 "rw_mbytes_per_sec": 0, 00:14:43.290 "r_mbytes_per_sec": 0, 00:14:43.290 "w_mbytes_per_sec": 0 00:14:43.290 }, 00:14:43.290 "claimed": true, 00:14:43.290 "claim_type": "exclusive_write", 00:14:43.290 "zoned": false, 00:14:43.290 "supported_io_types": { 00:14:43.290 "read": true, 00:14:43.290 "write": true, 00:14:43.290 "unmap": true, 00:14:43.290 "flush": true, 00:14:43.290 "reset": true, 00:14:43.290 "nvme_admin": false, 
00:14:43.290 "nvme_io": false, 00:14:43.290 "nvme_io_md": false, 00:14:43.290 "write_zeroes": true, 00:14:43.290 "zcopy": true, 00:14:43.290 "get_zone_info": false, 00:14:43.290 "zone_management": false, 00:14:43.290 "zone_append": false, 00:14:43.290 "compare": false, 00:14:43.290 "compare_and_write": false, 00:14:43.290 "abort": true, 00:14:43.290 "seek_hole": false, 00:14:43.290 "seek_data": false, 00:14:43.290 "copy": true, 00:14:43.290 "nvme_iov_md": false 00:14:43.290 }, 00:14:43.290 "memory_domains": [ 00:14:43.290 { 00:14:43.290 "dma_device_id": "system", 00:14:43.290 "dma_device_type": 1 00:14:43.290 }, 00:14:43.290 { 00:14:43.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.290 "dma_device_type": 2 00:14:43.290 } 00:14:43.290 ], 00:14:43.290 "driver_specific": {} 00:14:43.290 } 00:14:43.290 ] 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.290 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.550 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.550 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.550 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.550 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.551 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.551 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.551 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.551 "name": "Existed_Raid", 00:14:43.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.551 "strip_size_kb": 64, 00:14:43.551 "state": "configuring", 00:14:43.551 "raid_level": "raid5f", 00:14:43.551 "superblock": false, 00:14:43.551 "num_base_bdevs": 4, 00:14:43.551 "num_base_bdevs_discovered": 3, 00:14:43.551 "num_base_bdevs_operational": 4, 00:14:43.551 "base_bdevs_list": [ 00:14:43.551 { 00:14:43.551 "name": "BaseBdev1", 00:14:43.551 "uuid": "634cb485-288c-450c-856d-b5fba7abeef3", 00:14:43.551 "is_configured": true, 00:14:43.551 "data_offset": 0, 00:14:43.551 "data_size": 65536 00:14:43.551 }, 00:14:43.551 { 00:14:43.551 "name": "BaseBdev2", 00:14:43.551 "uuid": "fb29ad4e-dfe4-4887-9c40-ca87e98453e6", 00:14:43.551 "is_configured": true, 00:14:43.551 "data_offset": 0, 00:14:43.551 "data_size": 65536 00:14:43.551 }, 00:14:43.551 { 
00:14:43.551 "name": "BaseBdev3", 00:14:43.551 "uuid": "fd0f9072-1303-4b61-bc89-a2e7851c35e5", 00:14:43.551 "is_configured": true, 00:14:43.551 "data_offset": 0, 00:14:43.551 "data_size": 65536 00:14:43.551 }, 00:14:43.551 { 00:14:43.551 "name": "BaseBdev4", 00:14:43.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.551 "is_configured": false, 00:14:43.551 "data_offset": 0, 00:14:43.551 "data_size": 0 00:14:43.551 } 00:14:43.551 ] 00:14:43.551 }' 00:14:43.551 03:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.551 03:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.811 [2024-11-18 03:14:47.311850] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:43.811 [2024-11-18 03:14:47.311996] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:43.811 [2024-11-18 03:14:47.312024] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:43.811 [2024-11-18 03:14:47.312320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:43.811 [2024-11-18 03:14:47.312795] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:43.811 [2024-11-18 03:14:47.312843] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:43.811 [2024-11-18 03:14:47.313116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.811 BaseBdev4 00:14:43.811 03:14:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.811 [ 00:14:43.811 { 00:14:43.811 "name": "BaseBdev4", 00:14:43.811 "aliases": [ 00:14:43.811 "70959e0e-5f23-448c-953b-89d2f70c0623" 00:14:43.811 ], 00:14:43.811 "product_name": "Malloc disk", 00:14:43.811 "block_size": 512, 00:14:43.811 "num_blocks": 65536, 00:14:43.811 "uuid": "70959e0e-5f23-448c-953b-89d2f70c0623", 00:14:43.811 "assigned_rate_limits": { 00:14:43.811 "rw_ios_per_sec": 0, 00:14:43.811 
"rw_mbytes_per_sec": 0, 00:14:43.811 "r_mbytes_per_sec": 0, 00:14:43.811 "w_mbytes_per_sec": 0 00:14:43.811 }, 00:14:43.811 "claimed": true, 00:14:43.811 "claim_type": "exclusive_write", 00:14:43.811 "zoned": false, 00:14:43.811 "supported_io_types": { 00:14:43.811 "read": true, 00:14:43.811 "write": true, 00:14:43.811 "unmap": true, 00:14:43.811 "flush": true, 00:14:43.811 "reset": true, 00:14:43.811 "nvme_admin": false, 00:14:43.811 "nvme_io": false, 00:14:43.811 "nvme_io_md": false, 00:14:43.811 "write_zeroes": true, 00:14:43.811 "zcopy": true, 00:14:43.811 "get_zone_info": false, 00:14:43.811 "zone_management": false, 00:14:43.811 "zone_append": false, 00:14:43.811 "compare": false, 00:14:43.811 "compare_and_write": false, 00:14:43.811 "abort": true, 00:14:43.811 "seek_hole": false, 00:14:43.811 "seek_data": false, 00:14:43.811 "copy": true, 00:14:43.811 "nvme_iov_md": false 00:14:43.811 }, 00:14:43.811 "memory_domains": [ 00:14:43.811 { 00:14:43.811 "dma_device_id": "system", 00:14:43.811 "dma_device_type": 1 00:14:43.811 }, 00:14:43.811 { 00:14:43.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.811 "dma_device_type": 2 00:14:43.811 } 00:14:43.811 ], 00:14:43.811 "driver_specific": {} 00:14:43.811 } 00:14:43.811 ] 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.811 03:14:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.811 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.071 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.071 "name": "Existed_Raid", 00:14:44.071 "uuid": "008c89d7-77f4-406f-b21c-b26e72491b69", 00:14:44.071 "strip_size_kb": 64, 00:14:44.071 "state": "online", 00:14:44.071 "raid_level": "raid5f", 00:14:44.071 "superblock": false, 00:14:44.071 "num_base_bdevs": 4, 00:14:44.071 "num_base_bdevs_discovered": 4, 00:14:44.071 "num_base_bdevs_operational": 4, 00:14:44.071 "base_bdevs_list": [ 00:14:44.071 { 00:14:44.071 "name": 
"BaseBdev1", 00:14:44.071 "uuid": "634cb485-288c-450c-856d-b5fba7abeef3", 00:14:44.071 "is_configured": true, 00:14:44.071 "data_offset": 0, 00:14:44.071 "data_size": 65536 00:14:44.071 }, 00:14:44.071 { 00:14:44.071 "name": "BaseBdev2", 00:14:44.071 "uuid": "fb29ad4e-dfe4-4887-9c40-ca87e98453e6", 00:14:44.071 "is_configured": true, 00:14:44.071 "data_offset": 0, 00:14:44.071 "data_size": 65536 00:14:44.071 }, 00:14:44.071 { 00:14:44.071 "name": "BaseBdev3", 00:14:44.071 "uuid": "fd0f9072-1303-4b61-bc89-a2e7851c35e5", 00:14:44.071 "is_configured": true, 00:14:44.071 "data_offset": 0, 00:14:44.071 "data_size": 65536 00:14:44.071 }, 00:14:44.071 { 00:14:44.071 "name": "BaseBdev4", 00:14:44.071 "uuid": "70959e0e-5f23-448c-953b-89d2f70c0623", 00:14:44.071 "is_configured": true, 00:14:44.071 "data_offset": 0, 00:14:44.071 "data_size": 65536 00:14:44.071 } 00:14:44.071 ] 00:14:44.071 }' 00:14:44.071 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.071 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.332 [2024-11-18 03:14:47.763326] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:44.332 "name": "Existed_Raid", 00:14:44.332 "aliases": [ 00:14:44.332 "008c89d7-77f4-406f-b21c-b26e72491b69" 00:14:44.332 ], 00:14:44.332 "product_name": "Raid Volume", 00:14:44.332 "block_size": 512, 00:14:44.332 "num_blocks": 196608, 00:14:44.332 "uuid": "008c89d7-77f4-406f-b21c-b26e72491b69", 00:14:44.332 "assigned_rate_limits": { 00:14:44.332 "rw_ios_per_sec": 0, 00:14:44.332 "rw_mbytes_per_sec": 0, 00:14:44.332 "r_mbytes_per_sec": 0, 00:14:44.332 "w_mbytes_per_sec": 0 00:14:44.332 }, 00:14:44.332 "claimed": false, 00:14:44.332 "zoned": false, 00:14:44.332 "supported_io_types": { 00:14:44.332 "read": true, 00:14:44.332 "write": true, 00:14:44.332 "unmap": false, 00:14:44.332 "flush": false, 00:14:44.332 "reset": true, 00:14:44.332 "nvme_admin": false, 00:14:44.332 "nvme_io": false, 00:14:44.332 "nvme_io_md": false, 00:14:44.332 "write_zeroes": true, 00:14:44.332 "zcopy": false, 00:14:44.332 "get_zone_info": false, 00:14:44.332 "zone_management": false, 00:14:44.332 "zone_append": false, 00:14:44.332 "compare": false, 00:14:44.332 "compare_and_write": false, 00:14:44.332 "abort": false, 00:14:44.332 "seek_hole": false, 00:14:44.332 "seek_data": false, 00:14:44.332 "copy": false, 00:14:44.332 "nvme_iov_md": false 00:14:44.332 }, 00:14:44.332 "driver_specific": { 00:14:44.332 "raid": { 00:14:44.332 "uuid": "008c89d7-77f4-406f-b21c-b26e72491b69", 00:14:44.332 "strip_size_kb": 64, 
00:14:44.332 "state": "online", 00:14:44.332 "raid_level": "raid5f", 00:14:44.332 "superblock": false, 00:14:44.332 "num_base_bdevs": 4, 00:14:44.332 "num_base_bdevs_discovered": 4, 00:14:44.332 "num_base_bdevs_operational": 4, 00:14:44.332 "base_bdevs_list": [ 00:14:44.332 { 00:14:44.332 "name": "BaseBdev1", 00:14:44.332 "uuid": "634cb485-288c-450c-856d-b5fba7abeef3", 00:14:44.332 "is_configured": true, 00:14:44.332 "data_offset": 0, 00:14:44.332 "data_size": 65536 00:14:44.332 }, 00:14:44.332 { 00:14:44.332 "name": "BaseBdev2", 00:14:44.332 "uuid": "fb29ad4e-dfe4-4887-9c40-ca87e98453e6", 00:14:44.332 "is_configured": true, 00:14:44.332 "data_offset": 0, 00:14:44.332 "data_size": 65536 00:14:44.332 }, 00:14:44.332 { 00:14:44.332 "name": "BaseBdev3", 00:14:44.332 "uuid": "fd0f9072-1303-4b61-bc89-a2e7851c35e5", 00:14:44.332 "is_configured": true, 00:14:44.332 "data_offset": 0, 00:14:44.332 "data_size": 65536 00:14:44.332 }, 00:14:44.332 { 00:14:44.332 "name": "BaseBdev4", 00:14:44.332 "uuid": "70959e0e-5f23-448c-953b-89d2f70c0623", 00:14:44.332 "is_configured": true, 00:14:44.332 "data_offset": 0, 00:14:44.332 "data_size": 65536 00:14:44.332 } 00:14:44.332 ] 00:14:44.332 } 00:14:44.332 } 00:14:44.332 }' 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:44.332 BaseBdev2 00:14:44.332 BaseBdev3 00:14:44.332 BaseBdev4' 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.332 03:14:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.332 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.592 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.592 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.592 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.592 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.592 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.592 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:44.592 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.592 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.592 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.592 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.592 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.592 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.592 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:14:44.592 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.592 03:14:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.592 03:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.592 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.592 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.592 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.592 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.592 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:44.592 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.592 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.592 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.592 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.592 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.592 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:14:44.593 [2024-11-18 03:14:48.102578] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.593 03:14:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.593 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.853 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.853 "name": "Existed_Raid", 00:14:44.853 "uuid": "008c89d7-77f4-406f-b21c-b26e72491b69", 00:14:44.853 "strip_size_kb": 64, 00:14:44.853 "state": "online", 00:14:44.853 "raid_level": "raid5f", 00:14:44.853 "superblock": false, 00:14:44.853 "num_base_bdevs": 4, 00:14:44.853 "num_base_bdevs_discovered": 3, 00:14:44.853 "num_base_bdevs_operational": 3, 00:14:44.853 "base_bdevs_list": [ 00:14:44.853 { 00:14:44.853 "name": null, 00:14:44.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.853 "is_configured": false, 00:14:44.853 "data_offset": 0, 00:14:44.853 "data_size": 65536 00:14:44.853 }, 00:14:44.853 { 00:14:44.853 "name": "BaseBdev2", 00:14:44.853 "uuid": "fb29ad4e-dfe4-4887-9c40-ca87e98453e6", 00:14:44.853 "is_configured": true, 00:14:44.853 "data_offset": 0, 00:14:44.853 "data_size": 65536 00:14:44.853 }, 00:14:44.853 { 00:14:44.853 "name": "BaseBdev3", 00:14:44.853 "uuid": "fd0f9072-1303-4b61-bc89-a2e7851c35e5", 00:14:44.853 "is_configured": true, 00:14:44.853 "data_offset": 0, 00:14:44.853 "data_size": 65536 00:14:44.853 }, 00:14:44.853 { 00:14:44.853 "name": "BaseBdev4", 00:14:44.853 "uuid": "70959e0e-5f23-448c-953b-89d2f70c0623", 00:14:44.853 "is_configured": true, 00:14:44.853 "data_offset": 0, 00:14:44.853 "data_size": 65536 00:14:44.853 } 00:14:44.853 ] 00:14:44.853 }' 00:14:44.853 
03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.853 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.113 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:45.113 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:45.113 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.113 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:45.113 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.113 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.113 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.113 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:45.113 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:45.113 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:45.113 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.113 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.113 [2024-11-18 03:14:48.569181] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:45.113 [2024-11-18 03:14:48.569321] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.113 [2024-11-18 03:14:48.580466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.113 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:14:45.113 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:45.113 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:45.113 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.114 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.114 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.114 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:45.114 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.114 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:45.114 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:45.114 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:45.114 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.114 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.114 [2024-11-18 03:14:48.628408] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:45.114 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.114 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:45.114 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:45.114 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.114 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:14:45.114 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.114 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.114 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.375 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:45.375 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:45.375 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:45.375 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.375 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.375 [2024-11-18 03:14:48.699497] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:45.375 [2024-11-18 03:14:48.699591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:45.375 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.375 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:45.375 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:45.375 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:45.375 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.375 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.375 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.375 03:14:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.375 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.376 BaseBdev2 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.376 [ 00:14:45.376 { 00:14:45.376 "name": "BaseBdev2", 00:14:45.376 "aliases": [ 00:14:45.376 "fb94ac70-7dbd-42ba-93e8-f3a83f7f7c30" 00:14:45.376 ], 00:14:45.376 "product_name": "Malloc disk", 00:14:45.376 "block_size": 512, 00:14:45.376 "num_blocks": 65536, 00:14:45.376 "uuid": "fb94ac70-7dbd-42ba-93e8-f3a83f7f7c30", 00:14:45.376 "assigned_rate_limits": { 00:14:45.376 "rw_ios_per_sec": 0, 00:14:45.376 "rw_mbytes_per_sec": 0, 00:14:45.376 "r_mbytes_per_sec": 0, 00:14:45.376 "w_mbytes_per_sec": 0 00:14:45.376 }, 00:14:45.376 "claimed": false, 00:14:45.376 "zoned": false, 00:14:45.376 "supported_io_types": { 00:14:45.376 "read": true, 00:14:45.376 "write": true, 00:14:45.376 "unmap": true, 00:14:45.376 "flush": true, 00:14:45.376 "reset": true, 00:14:45.376 "nvme_admin": false, 00:14:45.376 "nvme_io": false, 00:14:45.376 "nvme_io_md": false, 00:14:45.376 "write_zeroes": true, 00:14:45.376 "zcopy": true, 00:14:45.376 "get_zone_info": false, 00:14:45.376 "zone_management": false, 00:14:45.376 "zone_append": false, 00:14:45.376 "compare": false, 00:14:45.376 "compare_and_write": false, 00:14:45.376 "abort": true, 00:14:45.376 "seek_hole": false, 00:14:45.376 "seek_data": false, 00:14:45.376 "copy": true, 00:14:45.376 "nvme_iov_md": false 00:14:45.376 }, 00:14:45.376 "memory_domains": [ 00:14:45.376 { 00:14:45.376 "dma_device_id": "system", 00:14:45.376 "dma_device_type": 1 00:14:45.376 }, 
00:14:45.376 { 00:14:45.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.376 "dma_device_type": 2 00:14:45.376 } 00:14:45.376 ], 00:14:45.376 "driver_specific": {} 00:14:45.376 } 00:14:45.376 ] 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.376 BaseBdev3 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.376 [ 00:14:45.376 { 00:14:45.376 "name": "BaseBdev3", 00:14:45.376 "aliases": [ 00:14:45.376 "0d17d3e4-e5f9-4fa8-9777-813dbd660c88" 00:14:45.376 ], 00:14:45.376 "product_name": "Malloc disk", 00:14:45.376 "block_size": 512, 00:14:45.376 "num_blocks": 65536, 00:14:45.376 "uuid": "0d17d3e4-e5f9-4fa8-9777-813dbd660c88", 00:14:45.376 "assigned_rate_limits": { 00:14:45.376 "rw_ios_per_sec": 0, 00:14:45.376 "rw_mbytes_per_sec": 0, 00:14:45.376 "r_mbytes_per_sec": 0, 00:14:45.376 "w_mbytes_per_sec": 0 00:14:45.376 }, 00:14:45.376 "claimed": false, 00:14:45.376 "zoned": false, 00:14:45.376 "supported_io_types": { 00:14:45.376 "read": true, 00:14:45.376 "write": true, 00:14:45.376 "unmap": true, 00:14:45.376 "flush": true, 00:14:45.376 "reset": true, 00:14:45.376 "nvme_admin": false, 00:14:45.376 "nvme_io": false, 00:14:45.376 "nvme_io_md": false, 00:14:45.376 "write_zeroes": true, 00:14:45.376 "zcopy": true, 00:14:45.376 "get_zone_info": false, 00:14:45.376 "zone_management": false, 00:14:45.376 "zone_append": false, 00:14:45.376 "compare": false, 00:14:45.376 "compare_and_write": false, 00:14:45.376 "abort": true, 00:14:45.376 "seek_hole": false, 00:14:45.376 "seek_data": false, 00:14:45.376 "copy": true, 00:14:45.376 "nvme_iov_md": false 00:14:45.376 }, 00:14:45.376 "memory_domains": [ 00:14:45.376 { 00:14:45.376 "dma_device_id": "system", 00:14:45.376 
"dma_device_type": 1 00:14:45.376 }, 00:14:45.376 { 00:14:45.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.376 "dma_device_type": 2 00:14:45.376 } 00:14:45.376 ], 00:14:45.376 "driver_specific": {} 00:14:45.376 } 00:14:45.376 ] 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.376 BaseBdev4 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:45.376 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:45.376 03:14:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.377 [ 00:14:45.377 { 00:14:45.377 "name": "BaseBdev4", 00:14:45.377 "aliases": [ 00:14:45.377 "542bb5cf-8e50-4fe9-acb6-4d84c5eef610" 00:14:45.377 ], 00:14:45.377 "product_name": "Malloc disk", 00:14:45.377 "block_size": 512, 00:14:45.377 "num_blocks": 65536, 00:14:45.377 "uuid": "542bb5cf-8e50-4fe9-acb6-4d84c5eef610", 00:14:45.377 "assigned_rate_limits": { 00:14:45.377 "rw_ios_per_sec": 0, 00:14:45.377 "rw_mbytes_per_sec": 0, 00:14:45.377 "r_mbytes_per_sec": 0, 00:14:45.377 "w_mbytes_per_sec": 0 00:14:45.377 }, 00:14:45.377 "claimed": false, 00:14:45.377 "zoned": false, 00:14:45.377 "supported_io_types": { 00:14:45.377 "read": true, 00:14:45.377 "write": true, 00:14:45.377 "unmap": true, 00:14:45.377 "flush": true, 00:14:45.377 "reset": true, 00:14:45.377 "nvme_admin": false, 00:14:45.377 "nvme_io": false, 00:14:45.377 "nvme_io_md": false, 00:14:45.377 "write_zeroes": true, 00:14:45.377 "zcopy": true, 00:14:45.377 "get_zone_info": false, 00:14:45.377 "zone_management": false, 00:14:45.377 "zone_append": false, 00:14:45.377 "compare": false, 00:14:45.377 "compare_and_write": false, 00:14:45.377 "abort": true, 00:14:45.377 "seek_hole": false, 00:14:45.377 "seek_data": false, 00:14:45.377 "copy": true, 00:14:45.377 "nvme_iov_md": false 00:14:45.377 }, 00:14:45.377 "memory_domains": [ 00:14:45.377 { 00:14:45.377 
"dma_device_id": "system", 00:14:45.377 "dma_device_type": 1 00:14:45.377 }, 00:14:45.377 { 00:14:45.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.377 "dma_device_type": 2 00:14:45.377 } 00:14:45.377 ], 00:14:45.377 "driver_specific": {} 00:14:45.377 } 00:14:45.377 ] 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.377 [2024-11-18 03:14:48.917400] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.377 [2024-11-18 03:14:48.917495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.377 [2024-11-18 03:14:48.917545] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.377 [2024-11-18 03:14:48.919442] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:45.377 [2024-11-18 03:14:48.919534] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.377 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.638 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.638 "name": "Existed_Raid", 00:14:45.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.638 "strip_size_kb": 64, 00:14:45.638 "state": "configuring", 00:14:45.638 "raid_level": "raid5f", 00:14:45.638 "superblock": false, 00:14:45.638 
"num_base_bdevs": 4, 00:14:45.638 "num_base_bdevs_discovered": 3, 00:14:45.638 "num_base_bdevs_operational": 4, 00:14:45.638 "base_bdevs_list": [ 00:14:45.638 { 00:14:45.638 "name": "BaseBdev1", 00:14:45.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.638 "is_configured": false, 00:14:45.638 "data_offset": 0, 00:14:45.638 "data_size": 0 00:14:45.638 }, 00:14:45.638 { 00:14:45.638 "name": "BaseBdev2", 00:14:45.638 "uuid": "fb94ac70-7dbd-42ba-93e8-f3a83f7f7c30", 00:14:45.638 "is_configured": true, 00:14:45.638 "data_offset": 0, 00:14:45.638 "data_size": 65536 00:14:45.638 }, 00:14:45.638 { 00:14:45.638 "name": "BaseBdev3", 00:14:45.638 "uuid": "0d17d3e4-e5f9-4fa8-9777-813dbd660c88", 00:14:45.638 "is_configured": true, 00:14:45.638 "data_offset": 0, 00:14:45.638 "data_size": 65536 00:14:45.638 }, 00:14:45.638 { 00:14:45.638 "name": "BaseBdev4", 00:14:45.638 "uuid": "542bb5cf-8e50-4fe9-acb6-4d84c5eef610", 00:14:45.638 "is_configured": true, 00:14:45.638 "data_offset": 0, 00:14:45.638 "data_size": 65536 00:14:45.638 } 00:14:45.638 ] 00:14:45.638 }' 00:14:45.638 03:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.638 03:14:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.898 [2024-11-18 03:14:49.352636] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.898 "name": "Existed_Raid", 00:14:45.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.898 "strip_size_kb": 64, 00:14:45.898 "state": "configuring", 00:14:45.898 "raid_level": "raid5f", 00:14:45.898 "superblock": false, 00:14:45.898 "num_base_bdevs": 4, 
00:14:45.898 "num_base_bdevs_discovered": 2, 00:14:45.898 "num_base_bdevs_operational": 4, 00:14:45.898 "base_bdevs_list": [ 00:14:45.898 { 00:14:45.898 "name": "BaseBdev1", 00:14:45.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.898 "is_configured": false, 00:14:45.898 "data_offset": 0, 00:14:45.898 "data_size": 0 00:14:45.898 }, 00:14:45.898 { 00:14:45.898 "name": null, 00:14:45.898 "uuid": "fb94ac70-7dbd-42ba-93e8-f3a83f7f7c30", 00:14:45.898 "is_configured": false, 00:14:45.898 "data_offset": 0, 00:14:45.898 "data_size": 65536 00:14:45.898 }, 00:14:45.898 { 00:14:45.898 "name": "BaseBdev3", 00:14:45.898 "uuid": "0d17d3e4-e5f9-4fa8-9777-813dbd660c88", 00:14:45.898 "is_configured": true, 00:14:45.898 "data_offset": 0, 00:14:45.898 "data_size": 65536 00:14:45.898 }, 00:14:45.898 { 00:14:45.898 "name": "BaseBdev4", 00:14:45.898 "uuid": "542bb5cf-8e50-4fe9-acb6-4d84c5eef610", 00:14:45.898 "is_configured": true, 00:14:45.898 "data_offset": 0, 00:14:45.898 "data_size": 65536 00:14:45.898 } 00:14:45.898 ] 00:14:45.898 }' 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.898 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:46.469 03:14:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.469 [2024-11-18 03:14:49.798885] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.469 BaseBdev1 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:46.469 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.469 03:14:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.469 [ 00:14:46.469 { 00:14:46.469 "name": "BaseBdev1", 00:14:46.469 "aliases": [ 00:14:46.469 "c1c2c455-5cc8-453a-8bf4-6da4ca0442f8" 00:14:46.470 ], 00:14:46.470 "product_name": "Malloc disk", 00:14:46.470 "block_size": 512, 00:14:46.470 "num_blocks": 65536, 00:14:46.470 "uuid": "c1c2c455-5cc8-453a-8bf4-6da4ca0442f8", 00:14:46.470 "assigned_rate_limits": { 00:14:46.470 "rw_ios_per_sec": 0, 00:14:46.470 "rw_mbytes_per_sec": 0, 00:14:46.470 "r_mbytes_per_sec": 0, 00:14:46.470 "w_mbytes_per_sec": 0 00:14:46.470 }, 00:14:46.470 "claimed": true, 00:14:46.470 "claim_type": "exclusive_write", 00:14:46.470 "zoned": false, 00:14:46.470 "supported_io_types": { 00:14:46.470 "read": true, 00:14:46.470 "write": true, 00:14:46.470 "unmap": true, 00:14:46.470 "flush": true, 00:14:46.470 "reset": true, 00:14:46.470 "nvme_admin": false, 00:14:46.470 "nvme_io": false, 00:14:46.470 "nvme_io_md": false, 00:14:46.470 "write_zeroes": true, 00:14:46.470 "zcopy": true, 00:14:46.470 "get_zone_info": false, 00:14:46.470 "zone_management": false, 00:14:46.470 "zone_append": false, 00:14:46.470 "compare": false, 00:14:46.470 "compare_and_write": false, 00:14:46.470 "abort": true, 00:14:46.470 "seek_hole": false, 00:14:46.470 "seek_data": false, 00:14:46.470 "copy": true, 00:14:46.470 "nvme_iov_md": false 00:14:46.470 }, 00:14:46.470 "memory_domains": [ 00:14:46.470 { 00:14:46.470 "dma_device_id": "system", 00:14:46.470 "dma_device_type": 1 00:14:46.470 }, 00:14:46.470 { 00:14:46.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.470 "dma_device_type": 2 00:14:46.470 } 00:14:46.470 ], 00:14:46.470 "driver_specific": {} 00:14:46.470 } 00:14:46.470 ] 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:46.470 03:14:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.470 "name": "Existed_Raid", 00:14:46.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.470 "strip_size_kb": 64, 00:14:46.470 "state": 
"configuring", 00:14:46.470 "raid_level": "raid5f", 00:14:46.470 "superblock": false, 00:14:46.470 "num_base_bdevs": 4, 00:14:46.470 "num_base_bdevs_discovered": 3, 00:14:46.470 "num_base_bdevs_operational": 4, 00:14:46.470 "base_bdevs_list": [ 00:14:46.470 { 00:14:46.470 "name": "BaseBdev1", 00:14:46.470 "uuid": "c1c2c455-5cc8-453a-8bf4-6da4ca0442f8", 00:14:46.470 "is_configured": true, 00:14:46.470 "data_offset": 0, 00:14:46.470 "data_size": 65536 00:14:46.470 }, 00:14:46.470 { 00:14:46.470 "name": null, 00:14:46.470 "uuid": "fb94ac70-7dbd-42ba-93e8-f3a83f7f7c30", 00:14:46.470 "is_configured": false, 00:14:46.470 "data_offset": 0, 00:14:46.470 "data_size": 65536 00:14:46.470 }, 00:14:46.470 { 00:14:46.470 "name": "BaseBdev3", 00:14:46.470 "uuid": "0d17d3e4-e5f9-4fa8-9777-813dbd660c88", 00:14:46.470 "is_configured": true, 00:14:46.470 "data_offset": 0, 00:14:46.470 "data_size": 65536 00:14:46.470 }, 00:14:46.470 { 00:14:46.470 "name": "BaseBdev4", 00:14:46.470 "uuid": "542bb5cf-8e50-4fe9-acb6-4d84c5eef610", 00:14:46.470 "is_configured": true, 00:14:46.470 "data_offset": 0, 00:14:46.470 "data_size": 65536 00:14:46.470 } 00:14:46.470 ] 00:14:46.470 }' 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.470 03:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.731 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:46.731 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.731 03:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.731 03:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.991 03:14:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.991 [2024-11-18 03:14:50.353981] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.991 03:14:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.991 "name": "Existed_Raid", 00:14:46.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.991 "strip_size_kb": 64, 00:14:46.991 "state": "configuring", 00:14:46.991 "raid_level": "raid5f", 00:14:46.991 "superblock": false, 00:14:46.991 "num_base_bdevs": 4, 00:14:46.991 "num_base_bdevs_discovered": 2, 00:14:46.991 "num_base_bdevs_operational": 4, 00:14:46.991 "base_bdevs_list": [ 00:14:46.991 { 00:14:46.991 "name": "BaseBdev1", 00:14:46.991 "uuid": "c1c2c455-5cc8-453a-8bf4-6da4ca0442f8", 00:14:46.991 "is_configured": true, 00:14:46.991 "data_offset": 0, 00:14:46.991 "data_size": 65536 00:14:46.991 }, 00:14:46.991 { 00:14:46.991 "name": null, 00:14:46.991 "uuid": "fb94ac70-7dbd-42ba-93e8-f3a83f7f7c30", 00:14:46.991 "is_configured": false, 00:14:46.991 "data_offset": 0, 00:14:46.991 "data_size": 65536 00:14:46.991 }, 00:14:46.991 { 00:14:46.991 "name": null, 00:14:46.991 "uuid": "0d17d3e4-e5f9-4fa8-9777-813dbd660c88", 00:14:46.991 "is_configured": false, 00:14:46.991 "data_offset": 0, 00:14:46.991 "data_size": 65536 00:14:46.991 }, 00:14:46.991 { 00:14:46.991 "name": "BaseBdev4", 00:14:46.991 "uuid": "542bb5cf-8e50-4fe9-acb6-4d84c5eef610", 00:14:46.991 "is_configured": true, 00:14:46.991 "data_offset": 0, 00:14:46.991 "data_size": 65536 00:14:46.991 } 00:14:46.991 ] 00:14:46.991 }' 00:14:46.991 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.991 03:14:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.252 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.252 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:47.252 03:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.252 03:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.252 03:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.512 [2024-11-18 03:14:50.857142] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.512 
03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.512 "name": "Existed_Raid", 00:14:47.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.512 "strip_size_kb": 64, 00:14:47.512 "state": "configuring", 00:14:47.512 "raid_level": "raid5f", 00:14:47.512 "superblock": false, 00:14:47.512 "num_base_bdevs": 4, 00:14:47.512 "num_base_bdevs_discovered": 3, 00:14:47.512 "num_base_bdevs_operational": 4, 00:14:47.512 "base_bdevs_list": [ 00:14:47.512 { 00:14:47.512 "name": "BaseBdev1", 00:14:47.512 "uuid": "c1c2c455-5cc8-453a-8bf4-6da4ca0442f8", 00:14:47.512 "is_configured": true, 00:14:47.512 "data_offset": 0, 00:14:47.512 "data_size": 65536 00:14:47.512 }, 00:14:47.512 { 00:14:47.512 "name": null, 00:14:47.512 "uuid": "fb94ac70-7dbd-42ba-93e8-f3a83f7f7c30", 00:14:47.512 "is_configured": 
false, 00:14:47.512 "data_offset": 0, 00:14:47.512 "data_size": 65536 00:14:47.512 }, 00:14:47.512 { 00:14:47.512 "name": "BaseBdev3", 00:14:47.512 "uuid": "0d17d3e4-e5f9-4fa8-9777-813dbd660c88", 00:14:47.512 "is_configured": true, 00:14:47.512 "data_offset": 0, 00:14:47.512 "data_size": 65536 00:14:47.512 }, 00:14:47.512 { 00:14:47.512 "name": "BaseBdev4", 00:14:47.512 "uuid": "542bb5cf-8e50-4fe9-acb6-4d84c5eef610", 00:14:47.512 "is_configured": true, 00:14:47.512 "data_offset": 0, 00:14:47.512 "data_size": 65536 00:14:47.512 } 00:14:47.512 ] 00:14:47.512 }' 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.512 03:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.773 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.773 03:14:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.773 03:14:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.773 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:47.773 03:14:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.034 [2024-11-18 03:14:51.356309] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.034 "name": "Existed_Raid", 00:14:48.034 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:48.034 "strip_size_kb": 64, 00:14:48.034 "state": "configuring", 00:14:48.034 "raid_level": "raid5f", 00:14:48.034 "superblock": false, 00:14:48.034 "num_base_bdevs": 4, 00:14:48.034 "num_base_bdevs_discovered": 2, 00:14:48.034 "num_base_bdevs_operational": 4, 00:14:48.034 "base_bdevs_list": [ 00:14:48.034 { 00:14:48.034 "name": null, 00:14:48.034 "uuid": "c1c2c455-5cc8-453a-8bf4-6da4ca0442f8", 00:14:48.034 "is_configured": false, 00:14:48.034 "data_offset": 0, 00:14:48.034 "data_size": 65536 00:14:48.034 }, 00:14:48.034 { 00:14:48.034 "name": null, 00:14:48.034 "uuid": "fb94ac70-7dbd-42ba-93e8-f3a83f7f7c30", 00:14:48.034 "is_configured": false, 00:14:48.034 "data_offset": 0, 00:14:48.034 "data_size": 65536 00:14:48.034 }, 00:14:48.034 { 00:14:48.034 "name": "BaseBdev3", 00:14:48.034 "uuid": "0d17d3e4-e5f9-4fa8-9777-813dbd660c88", 00:14:48.034 "is_configured": true, 00:14:48.034 "data_offset": 0, 00:14:48.034 "data_size": 65536 00:14:48.034 }, 00:14:48.034 { 00:14:48.034 "name": "BaseBdev4", 00:14:48.034 "uuid": "542bb5cf-8e50-4fe9-acb6-4d84c5eef610", 00:14:48.034 "is_configured": true, 00:14:48.034 "data_offset": 0, 00:14:48.034 "data_size": 65536 00:14:48.034 } 00:14:48.034 ] 00:14:48.034 }' 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.034 03:14:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.295 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.295 03:14:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.295 03:14:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.295 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:48.295 03:14:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.295 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:48.295 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:48.295 03:14:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.295 03:14:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.295 [2024-11-18 03:14:51.849903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.295 03:14:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.295 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:48.295 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.295 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.295 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.295 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.295 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.295 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.296 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.296 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.296 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.296 03:14:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.296 03:14:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.296 03:14:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.296 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.556 03:14:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.556 03:14:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.556 "name": "Existed_Raid", 00:14:48.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.556 "strip_size_kb": 64, 00:14:48.556 "state": "configuring", 00:14:48.556 "raid_level": "raid5f", 00:14:48.556 "superblock": false, 00:14:48.556 "num_base_bdevs": 4, 00:14:48.556 "num_base_bdevs_discovered": 3, 00:14:48.556 "num_base_bdevs_operational": 4, 00:14:48.556 "base_bdevs_list": [ 00:14:48.556 { 00:14:48.556 "name": null, 00:14:48.556 "uuid": "c1c2c455-5cc8-453a-8bf4-6da4ca0442f8", 00:14:48.556 "is_configured": false, 00:14:48.556 "data_offset": 0, 00:14:48.556 "data_size": 65536 00:14:48.556 }, 00:14:48.556 { 00:14:48.556 "name": "BaseBdev2", 00:14:48.556 "uuid": "fb94ac70-7dbd-42ba-93e8-f3a83f7f7c30", 00:14:48.556 "is_configured": true, 00:14:48.556 "data_offset": 0, 00:14:48.556 "data_size": 65536 00:14:48.556 }, 00:14:48.556 { 00:14:48.556 "name": "BaseBdev3", 00:14:48.556 "uuid": "0d17d3e4-e5f9-4fa8-9777-813dbd660c88", 00:14:48.556 "is_configured": true, 00:14:48.556 "data_offset": 0, 00:14:48.556 "data_size": 65536 00:14:48.556 }, 00:14:48.556 { 00:14:48.556 "name": "BaseBdev4", 00:14:48.556 "uuid": "542bb5cf-8e50-4fe9-acb6-4d84c5eef610", 00:14:48.556 "is_configured": true, 00:14:48.556 "data_offset": 0, 00:14:48.556 "data_size": 65536 00:14:48.556 } 00:14:48.556 ] 00:14:48.556 }' 00:14:48.556 03:14:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.556 03:14:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.816 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.816 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.816 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.816 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:48.816 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.816 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:48.816 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:48.816 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.816 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.816 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.816 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.816 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c1c2c455-5cc8-453a-8bf4-6da4ca0442f8 00:14:48.816 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.816 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.077 [2024-11-18 03:14:52.399951] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:49.077 [2024-11-18 
03:14:52.400079] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:49.077 [2024-11-18 03:14:52.400103] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:49.077 [2024-11-18 03:14:52.400355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:49.077 [2024-11-18 03:14:52.400803] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:49.077 [2024-11-18 03:14:52.400851] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:49.077 [2024-11-18 03:14:52.401066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.077 NewBaseBdev 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.077 [ 00:14:49.077 { 00:14:49.077 "name": "NewBaseBdev", 00:14:49.077 "aliases": [ 00:14:49.077 "c1c2c455-5cc8-453a-8bf4-6da4ca0442f8" 00:14:49.077 ], 00:14:49.077 "product_name": "Malloc disk", 00:14:49.077 "block_size": 512, 00:14:49.077 "num_blocks": 65536, 00:14:49.077 "uuid": "c1c2c455-5cc8-453a-8bf4-6da4ca0442f8", 00:14:49.077 "assigned_rate_limits": { 00:14:49.077 "rw_ios_per_sec": 0, 00:14:49.077 "rw_mbytes_per_sec": 0, 00:14:49.077 "r_mbytes_per_sec": 0, 00:14:49.077 "w_mbytes_per_sec": 0 00:14:49.077 }, 00:14:49.077 "claimed": true, 00:14:49.077 "claim_type": "exclusive_write", 00:14:49.077 "zoned": false, 00:14:49.077 "supported_io_types": { 00:14:49.077 "read": true, 00:14:49.077 "write": true, 00:14:49.077 "unmap": true, 00:14:49.077 "flush": true, 00:14:49.077 "reset": true, 00:14:49.077 "nvme_admin": false, 00:14:49.077 "nvme_io": false, 00:14:49.077 "nvme_io_md": false, 00:14:49.077 "write_zeroes": true, 00:14:49.077 "zcopy": true, 00:14:49.077 "get_zone_info": false, 00:14:49.077 "zone_management": false, 00:14:49.077 "zone_append": false, 00:14:49.077 "compare": false, 00:14:49.077 "compare_and_write": false, 00:14:49.077 "abort": true, 00:14:49.077 "seek_hole": false, 00:14:49.077 "seek_data": false, 00:14:49.077 "copy": true, 00:14:49.077 "nvme_iov_md": false 00:14:49.077 }, 00:14:49.077 "memory_domains": [ 00:14:49.077 { 00:14:49.077 "dma_device_id": "system", 00:14:49.077 "dma_device_type": 1 00:14:49.077 }, 00:14:49.077 { 00:14:49.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.077 "dma_device_type": 2 00:14:49.077 } 
00:14:49.077 ], 00:14:49.077 "driver_specific": {} 00:14:49.077 } 00:14:49.077 ] 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.077 "name": "Existed_Raid", 00:14:49.077 "uuid": "a1029f41-b5d5-4a3b-b030-d3d665195a79", 00:14:49.077 "strip_size_kb": 64, 00:14:49.077 "state": "online", 00:14:49.077 "raid_level": "raid5f", 00:14:49.077 "superblock": false, 00:14:49.077 "num_base_bdevs": 4, 00:14:49.077 "num_base_bdevs_discovered": 4, 00:14:49.077 "num_base_bdevs_operational": 4, 00:14:49.077 "base_bdevs_list": [ 00:14:49.077 { 00:14:49.077 "name": "NewBaseBdev", 00:14:49.077 "uuid": "c1c2c455-5cc8-453a-8bf4-6da4ca0442f8", 00:14:49.077 "is_configured": true, 00:14:49.077 "data_offset": 0, 00:14:49.077 "data_size": 65536 00:14:49.077 }, 00:14:49.077 { 00:14:49.077 "name": "BaseBdev2", 00:14:49.077 "uuid": "fb94ac70-7dbd-42ba-93e8-f3a83f7f7c30", 00:14:49.077 "is_configured": true, 00:14:49.077 "data_offset": 0, 00:14:49.077 "data_size": 65536 00:14:49.077 }, 00:14:49.077 { 00:14:49.077 "name": "BaseBdev3", 00:14:49.077 "uuid": "0d17d3e4-e5f9-4fa8-9777-813dbd660c88", 00:14:49.077 "is_configured": true, 00:14:49.077 "data_offset": 0, 00:14:49.077 "data_size": 65536 00:14:49.077 }, 00:14:49.077 { 00:14:49.077 "name": "BaseBdev4", 00:14:49.077 "uuid": "542bb5cf-8e50-4fe9-acb6-4d84c5eef610", 00:14:49.077 "is_configured": true, 00:14:49.077 "data_offset": 0, 00:14:49.077 "data_size": 65536 00:14:49.077 } 00:14:49.077 ] 00:14:49.077 }' 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.077 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.337 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:49.337 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:49.337 03:14:52 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:49.337 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:49.337 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:49.337 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:49.337 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:49.337 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:49.337 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.337 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.337 [2024-11-18 03:14:52.839438] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.337 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.337 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:49.337 "name": "Existed_Raid", 00:14:49.337 "aliases": [ 00:14:49.337 "a1029f41-b5d5-4a3b-b030-d3d665195a79" 00:14:49.337 ], 00:14:49.337 "product_name": "Raid Volume", 00:14:49.337 "block_size": 512, 00:14:49.337 "num_blocks": 196608, 00:14:49.337 "uuid": "a1029f41-b5d5-4a3b-b030-d3d665195a79", 00:14:49.337 "assigned_rate_limits": { 00:14:49.337 "rw_ios_per_sec": 0, 00:14:49.337 "rw_mbytes_per_sec": 0, 00:14:49.337 "r_mbytes_per_sec": 0, 00:14:49.337 "w_mbytes_per_sec": 0 00:14:49.337 }, 00:14:49.337 "claimed": false, 00:14:49.337 "zoned": false, 00:14:49.337 "supported_io_types": { 00:14:49.337 "read": true, 00:14:49.337 "write": true, 00:14:49.337 "unmap": false, 00:14:49.337 "flush": false, 00:14:49.337 "reset": true, 00:14:49.337 "nvme_admin": false, 00:14:49.337 "nvme_io": false, 00:14:49.337 "nvme_io_md": 
false, 00:14:49.337 "write_zeroes": true, 00:14:49.337 "zcopy": false, 00:14:49.337 "get_zone_info": false, 00:14:49.337 "zone_management": false, 00:14:49.337 "zone_append": false, 00:14:49.337 "compare": false, 00:14:49.337 "compare_and_write": false, 00:14:49.337 "abort": false, 00:14:49.337 "seek_hole": false, 00:14:49.337 "seek_data": false, 00:14:49.337 "copy": false, 00:14:49.337 "nvme_iov_md": false 00:14:49.337 }, 00:14:49.337 "driver_specific": { 00:14:49.337 "raid": { 00:14:49.337 "uuid": "a1029f41-b5d5-4a3b-b030-d3d665195a79", 00:14:49.337 "strip_size_kb": 64, 00:14:49.337 "state": "online", 00:14:49.337 "raid_level": "raid5f", 00:14:49.337 "superblock": false, 00:14:49.337 "num_base_bdevs": 4, 00:14:49.337 "num_base_bdevs_discovered": 4, 00:14:49.337 "num_base_bdevs_operational": 4, 00:14:49.337 "base_bdevs_list": [ 00:14:49.337 { 00:14:49.337 "name": "NewBaseBdev", 00:14:49.337 "uuid": "c1c2c455-5cc8-453a-8bf4-6da4ca0442f8", 00:14:49.337 "is_configured": true, 00:14:49.337 "data_offset": 0, 00:14:49.337 "data_size": 65536 00:14:49.337 }, 00:14:49.337 { 00:14:49.337 "name": "BaseBdev2", 00:14:49.337 "uuid": "fb94ac70-7dbd-42ba-93e8-f3a83f7f7c30", 00:14:49.337 "is_configured": true, 00:14:49.337 "data_offset": 0, 00:14:49.337 "data_size": 65536 00:14:49.337 }, 00:14:49.337 { 00:14:49.337 "name": "BaseBdev3", 00:14:49.337 "uuid": "0d17d3e4-e5f9-4fa8-9777-813dbd660c88", 00:14:49.337 "is_configured": true, 00:14:49.337 "data_offset": 0, 00:14:49.337 "data_size": 65536 00:14:49.337 }, 00:14:49.337 { 00:14:49.337 "name": "BaseBdev4", 00:14:49.337 "uuid": "542bb5cf-8e50-4fe9-acb6-4d84c5eef610", 00:14:49.337 "is_configured": true, 00:14:49.337 "data_offset": 0, 00:14:49.337 "data_size": 65536 00:14:49.337 } 00:14:49.337 ] 00:14:49.337 } 00:14:49.337 } 00:14:49.337 }' 00:14:49.337 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:49.337 03:14:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:49.337 BaseBdev2 00:14:49.338 BaseBdev3 00:14:49.338 BaseBdev4' 00:14:49.598 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.598 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:49.598 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.598 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.598 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:49.598 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.598 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.598 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.598 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.598 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.598 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.598 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:49.598 03:14:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.598 03:14:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.598 03:14:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.598 03:14:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.598 [2024-11-18 03:14:53.146681] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:49.598 [2024-11-18 03:14:53.146750] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:49.598 [2024-11-18 03:14:53.146833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:49.598 [2024-11-18 03:14:53.147101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:49.598 [2024-11-18 03:14:53.147124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93374 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 93374 ']' 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 93374 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:14:49.598 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93374 00:14:49.859 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:49.859 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:49.859 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93374' 00:14:49.859 killing process with pid 93374 00:14:49.859 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 93374 00:14:49.859 [2024-11-18 03:14:53.196808] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:49.859 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 93374 00:14:49.859 [2024-11-18 03:14:53.238223] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:50.119 00:14:50.119 real 0m9.667s 00:14:50.119 user 0m16.465s 00:14:50.119 sys 0m2.093s 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.119 ************************************ 00:14:50.119 END TEST raid5f_state_function_test 00:14:50.119 ************************************ 00:14:50.119 03:14:53 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:14:50.119 03:14:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:50.119 03:14:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:50.119 03:14:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:50.119 ************************************ 00:14:50.119 START TEST 
raid5f_state_function_test_sb 00:14:50.119 ************************************ 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:50.119 
03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=94025 00:14:50.119 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:50.120 03:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 94025' 00:14:50.120 Process raid pid: 94025 00:14:50.120 03:14:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 94025 00:14:50.120 03:14:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 94025 ']' 00:14:50.120 03:14:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.120 03:14:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:50.120 03:14:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.120 03:14:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:50.120 03:14:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.120 [2024-11-18 03:14:53.654692] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:50.120 [2024-11-18 03:14:53.654907] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.380 [2024-11-18 03:14:53.814333] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.380 [2024-11-18 03:14:53.864638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.380 [2024-11-18 03:14:53.906573] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.380 [2024-11-18 03:14:53.906695] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.950 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.950 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:50.951 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:50.951 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.951 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.951 [2024-11-18 03:14:54.507928] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.951 [2024-11-18 03:14:54.508029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.951 [2024-11-18 03:14:54.508062] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.951 [2024-11-18 03:14:54.508085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.951 [2024-11-18 03:14:54.508103] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:14:50.951 [2024-11-18 03:14:54.508127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:50.951 [2024-11-18 03:14:54.508145] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:50.951 [2024-11-18 03:14:54.508165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:50.951 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.951 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:50.951 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.951 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.951 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.951 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.951 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.951 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.951 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.951 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.951 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.951 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.951 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:14:50.951 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.951 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.211 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.211 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.211 "name": "Existed_Raid", 00:14:51.211 "uuid": "3136e5a0-7a59-4095-85f7-37029b584013", 00:14:51.211 "strip_size_kb": 64, 00:14:51.211 "state": "configuring", 00:14:51.211 "raid_level": "raid5f", 00:14:51.211 "superblock": true, 00:14:51.211 "num_base_bdevs": 4, 00:14:51.211 "num_base_bdevs_discovered": 0, 00:14:51.211 "num_base_bdevs_operational": 4, 00:14:51.211 "base_bdevs_list": [ 00:14:51.211 { 00:14:51.211 "name": "BaseBdev1", 00:14:51.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.211 "is_configured": false, 00:14:51.211 "data_offset": 0, 00:14:51.211 "data_size": 0 00:14:51.211 }, 00:14:51.211 { 00:14:51.211 "name": "BaseBdev2", 00:14:51.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.211 "is_configured": false, 00:14:51.211 "data_offset": 0, 00:14:51.211 "data_size": 0 00:14:51.211 }, 00:14:51.211 { 00:14:51.211 "name": "BaseBdev3", 00:14:51.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.211 "is_configured": false, 00:14:51.211 "data_offset": 0, 00:14:51.211 "data_size": 0 00:14:51.211 }, 00:14:51.211 { 00:14:51.211 "name": "BaseBdev4", 00:14:51.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.211 "is_configured": false, 00:14:51.211 "data_offset": 0, 00:14:51.211 "data_size": 0 00:14:51.211 } 00:14:51.211 ] 00:14:51.211 }' 00:14:51.211 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.211 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.471 [2024-11-18 03:14:54.947076] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:51.471 [2024-11-18 03:14:54.947158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.471 [2024-11-18 03:14:54.959105] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:51.471 [2024-11-18 03:14:54.959179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:51.471 [2024-11-18 03:14:54.959205] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:51.471 [2024-11-18 03:14:54.959227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:51.471 [2024-11-18 03:14:54.959245] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:51.471 [2024-11-18 03:14:54.959266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:51.471 [2024-11-18 03:14:54.959283] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:51.471 [2024-11-18 03:14:54.959303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.471 [2024-11-18 03:14:54.979879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.471 BaseBdev1 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.471 03:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.471 [ 00:14:51.471 { 00:14:51.471 "name": "BaseBdev1", 00:14:51.471 "aliases": [ 00:14:51.471 "be46089d-ee1c-45e3-bcbe-09344ee78695" 00:14:51.471 ], 00:14:51.471 "product_name": "Malloc disk", 00:14:51.471 "block_size": 512, 00:14:51.471 "num_blocks": 65536, 00:14:51.471 "uuid": "be46089d-ee1c-45e3-bcbe-09344ee78695", 00:14:51.471 "assigned_rate_limits": { 00:14:51.471 "rw_ios_per_sec": 0, 00:14:51.471 "rw_mbytes_per_sec": 0, 00:14:51.471 "r_mbytes_per_sec": 0, 00:14:51.471 "w_mbytes_per_sec": 0 00:14:51.471 }, 00:14:51.471 "claimed": true, 00:14:51.471 "claim_type": "exclusive_write", 00:14:51.471 "zoned": false, 00:14:51.471 "supported_io_types": { 00:14:51.471 "read": true, 00:14:51.471 "write": true, 00:14:51.471 "unmap": true, 00:14:51.471 "flush": true, 00:14:51.471 "reset": true, 00:14:51.471 "nvme_admin": false, 00:14:51.471 "nvme_io": false, 00:14:51.471 "nvme_io_md": false, 00:14:51.471 "write_zeroes": true, 00:14:51.471 "zcopy": true, 00:14:51.472 "get_zone_info": false, 00:14:51.472 "zone_management": false, 00:14:51.472 "zone_append": false, 00:14:51.472 "compare": false, 00:14:51.472 "compare_and_write": false, 00:14:51.472 "abort": true, 00:14:51.472 "seek_hole": false, 00:14:51.472 "seek_data": false, 00:14:51.472 "copy": true, 00:14:51.472 "nvme_iov_md": false 00:14:51.472 }, 00:14:51.472 "memory_domains": [ 00:14:51.472 { 00:14:51.472 "dma_device_id": "system", 00:14:51.472 "dma_device_type": 1 00:14:51.472 }, 00:14:51.472 { 00:14:51.472 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:51.472 "dma_device_type": 2 00:14:51.472 } 00:14:51.472 ], 00:14:51.472 "driver_specific": {} 00:14:51.472 } 00:14:51.472 ] 00:14:51.472 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.472 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:51.472 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:51.472 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.472 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.472 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.472 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.472 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.472 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.472 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.472 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.472 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.472 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.472 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.472 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.472 03:14:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.731 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.731 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.731 "name": "Existed_Raid", 00:14:51.731 "uuid": "4353d30b-0ea2-468a-b040-4339584e4b16", 00:14:51.731 "strip_size_kb": 64, 00:14:51.731 "state": "configuring", 00:14:51.731 "raid_level": "raid5f", 00:14:51.731 "superblock": true, 00:14:51.731 "num_base_bdevs": 4, 00:14:51.731 "num_base_bdevs_discovered": 1, 00:14:51.731 "num_base_bdevs_operational": 4, 00:14:51.731 "base_bdevs_list": [ 00:14:51.731 { 00:14:51.731 "name": "BaseBdev1", 00:14:51.731 "uuid": "be46089d-ee1c-45e3-bcbe-09344ee78695", 00:14:51.731 "is_configured": true, 00:14:51.731 "data_offset": 2048, 00:14:51.731 "data_size": 63488 00:14:51.731 }, 00:14:51.731 { 00:14:51.731 "name": "BaseBdev2", 00:14:51.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.731 "is_configured": false, 00:14:51.731 "data_offset": 0, 00:14:51.731 "data_size": 0 00:14:51.731 }, 00:14:51.731 { 00:14:51.731 "name": "BaseBdev3", 00:14:51.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.731 "is_configured": false, 00:14:51.731 "data_offset": 0, 00:14:51.731 "data_size": 0 00:14:51.731 }, 00:14:51.731 { 00:14:51.731 "name": "BaseBdev4", 00:14:51.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.731 "is_configured": false, 00:14:51.731 "data_offset": 0, 00:14:51.731 "data_size": 0 00:14:51.731 } 00:14:51.731 ] 00:14:51.731 }' 00:14:51.731 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.731 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:51.991 03:14:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.991 [2024-11-18 03:14:55.447128] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:51.991 [2024-11-18 03:14:55.447241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.991 [2024-11-18 03:14:55.459141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.991 [2024-11-18 03:14:55.460991] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:51.991 [2024-11-18 03:14:55.461065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:51.991 [2024-11-18 03:14:55.461093] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:51.991 [2024-11-18 03:14:55.461116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:51.991 [2024-11-18 03:14:55.461134] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:51.991 [2024-11-18 03:14:55.461153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.991 03:14:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.991 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.991 "name": "Existed_Raid", 00:14:51.991 "uuid": "68dc4750-746b-4d70-a27a-4a7dac72d754", 00:14:51.991 "strip_size_kb": 64, 00:14:51.992 "state": "configuring", 00:14:51.992 "raid_level": "raid5f", 00:14:51.992 "superblock": true, 00:14:51.992 "num_base_bdevs": 4, 00:14:51.992 "num_base_bdevs_discovered": 1, 00:14:51.992 "num_base_bdevs_operational": 4, 00:14:51.992 "base_bdevs_list": [ 00:14:51.992 { 00:14:51.992 "name": "BaseBdev1", 00:14:51.992 "uuid": "be46089d-ee1c-45e3-bcbe-09344ee78695", 00:14:51.992 "is_configured": true, 00:14:51.992 "data_offset": 2048, 00:14:51.992 "data_size": 63488 00:14:51.992 }, 00:14:51.992 { 00:14:51.992 "name": "BaseBdev2", 00:14:51.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.992 "is_configured": false, 00:14:51.992 "data_offset": 0, 00:14:51.992 "data_size": 0 00:14:51.992 }, 00:14:51.992 { 00:14:51.992 "name": "BaseBdev3", 00:14:51.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.992 "is_configured": false, 00:14:51.992 "data_offset": 0, 00:14:51.992 "data_size": 0 00:14:51.992 }, 00:14:51.992 { 00:14:51.992 "name": "BaseBdev4", 00:14:51.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.992 "is_configured": false, 00:14:51.992 "data_offset": 0, 00:14:51.992 "data_size": 0 00:14:51.992 } 00:14:51.992 ] 00:14:51.992 }' 00:14:51.992 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.992 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.561 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:52.561 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:52.561 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.561 [2024-11-18 03:14:55.884523] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.561 BaseBdev2 00:14:52.561 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.561 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:52.561 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:52.561 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:52.561 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:52.561 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:52.561 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.562 [ 00:14:52.562 { 00:14:52.562 "name": "BaseBdev2", 00:14:52.562 "aliases": [ 00:14:52.562 
"d0728b86-f11d-4b8e-a1eb-0edb1a2913af" 00:14:52.562 ], 00:14:52.562 "product_name": "Malloc disk", 00:14:52.562 "block_size": 512, 00:14:52.562 "num_blocks": 65536, 00:14:52.562 "uuid": "d0728b86-f11d-4b8e-a1eb-0edb1a2913af", 00:14:52.562 "assigned_rate_limits": { 00:14:52.562 "rw_ios_per_sec": 0, 00:14:52.562 "rw_mbytes_per_sec": 0, 00:14:52.562 "r_mbytes_per_sec": 0, 00:14:52.562 "w_mbytes_per_sec": 0 00:14:52.562 }, 00:14:52.562 "claimed": true, 00:14:52.562 "claim_type": "exclusive_write", 00:14:52.562 "zoned": false, 00:14:52.562 "supported_io_types": { 00:14:52.562 "read": true, 00:14:52.562 "write": true, 00:14:52.562 "unmap": true, 00:14:52.562 "flush": true, 00:14:52.562 "reset": true, 00:14:52.562 "nvme_admin": false, 00:14:52.562 "nvme_io": false, 00:14:52.562 "nvme_io_md": false, 00:14:52.562 "write_zeroes": true, 00:14:52.562 "zcopy": true, 00:14:52.562 "get_zone_info": false, 00:14:52.562 "zone_management": false, 00:14:52.562 "zone_append": false, 00:14:52.562 "compare": false, 00:14:52.562 "compare_and_write": false, 00:14:52.562 "abort": true, 00:14:52.562 "seek_hole": false, 00:14:52.562 "seek_data": false, 00:14:52.562 "copy": true, 00:14:52.562 "nvme_iov_md": false 00:14:52.562 }, 00:14:52.562 "memory_domains": [ 00:14:52.562 { 00:14:52.562 "dma_device_id": "system", 00:14:52.562 "dma_device_type": 1 00:14:52.562 }, 00:14:52.562 { 00:14:52.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.562 "dma_device_type": 2 00:14:52.562 } 00:14:52.562 ], 00:14:52.562 "driver_specific": {} 00:14:52.562 } 00:14:52.562 ] 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.562 "name": "Existed_Raid", 00:14:52.562 "uuid": 
"68dc4750-746b-4d70-a27a-4a7dac72d754", 00:14:52.562 "strip_size_kb": 64, 00:14:52.562 "state": "configuring", 00:14:52.562 "raid_level": "raid5f", 00:14:52.562 "superblock": true, 00:14:52.562 "num_base_bdevs": 4, 00:14:52.562 "num_base_bdevs_discovered": 2, 00:14:52.562 "num_base_bdevs_operational": 4, 00:14:52.562 "base_bdevs_list": [ 00:14:52.562 { 00:14:52.562 "name": "BaseBdev1", 00:14:52.562 "uuid": "be46089d-ee1c-45e3-bcbe-09344ee78695", 00:14:52.562 "is_configured": true, 00:14:52.562 "data_offset": 2048, 00:14:52.562 "data_size": 63488 00:14:52.562 }, 00:14:52.562 { 00:14:52.562 "name": "BaseBdev2", 00:14:52.562 "uuid": "d0728b86-f11d-4b8e-a1eb-0edb1a2913af", 00:14:52.562 "is_configured": true, 00:14:52.562 "data_offset": 2048, 00:14:52.562 "data_size": 63488 00:14:52.562 }, 00:14:52.562 { 00:14:52.562 "name": "BaseBdev3", 00:14:52.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.562 "is_configured": false, 00:14:52.562 "data_offset": 0, 00:14:52.562 "data_size": 0 00:14:52.562 }, 00:14:52.562 { 00:14:52.562 "name": "BaseBdev4", 00:14:52.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.562 "is_configured": false, 00:14:52.562 "data_offset": 0, 00:14:52.562 "data_size": 0 00:14:52.562 } 00:14:52.562 ] 00:14:52.562 }' 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.562 03:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.822 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:52.822 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.822 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.822 [2024-11-18 03:14:56.350749] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.822 BaseBdev3 
00:14:52.822 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.822 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:52.822 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:52.822 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:52.822 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:52.822 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:52.822 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:52.822 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:52.822 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.822 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.822 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.822 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:52.822 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.822 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.822 [ 00:14:52.822 { 00:14:52.822 "name": "BaseBdev3", 00:14:52.822 "aliases": [ 00:14:52.822 "bb90c842-d6dd-4714-bd4f-8b434342f6b6" 00:14:52.822 ], 00:14:52.822 "product_name": "Malloc disk", 00:14:52.822 "block_size": 512, 00:14:52.822 "num_blocks": 65536, 00:14:52.822 "uuid": "bb90c842-d6dd-4714-bd4f-8b434342f6b6", 00:14:52.823 
"assigned_rate_limits": { 00:14:52.823 "rw_ios_per_sec": 0, 00:14:52.823 "rw_mbytes_per_sec": 0, 00:14:52.823 "r_mbytes_per_sec": 0, 00:14:52.823 "w_mbytes_per_sec": 0 00:14:52.823 }, 00:14:52.823 "claimed": true, 00:14:52.823 "claim_type": "exclusive_write", 00:14:52.823 "zoned": false, 00:14:52.823 "supported_io_types": { 00:14:52.823 "read": true, 00:14:52.823 "write": true, 00:14:52.823 "unmap": true, 00:14:52.823 "flush": true, 00:14:52.823 "reset": true, 00:14:52.823 "nvme_admin": false, 00:14:52.823 "nvme_io": false, 00:14:52.823 "nvme_io_md": false, 00:14:52.823 "write_zeroes": true, 00:14:52.823 "zcopy": true, 00:14:52.823 "get_zone_info": false, 00:14:52.823 "zone_management": false, 00:14:52.823 "zone_append": false, 00:14:52.823 "compare": false, 00:14:52.823 "compare_and_write": false, 00:14:52.823 "abort": true, 00:14:52.823 "seek_hole": false, 00:14:52.823 "seek_data": false, 00:14:52.823 "copy": true, 00:14:52.823 "nvme_iov_md": false 00:14:52.823 }, 00:14:52.823 "memory_domains": [ 00:14:52.823 { 00:14:52.823 "dma_device_id": "system", 00:14:52.823 "dma_device_type": 1 00:14:52.823 }, 00:14:52.823 { 00:14:52.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.823 "dma_device_type": 2 00:14:52.823 } 00:14:52.823 ], 00:14:52.823 "driver_specific": {} 00:14:52.823 } 00:14:52.823 ] 00:14:52.823 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.823 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:52.823 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:52.823 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:52.823 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:52.823 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:52.823 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.823 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.823 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.823 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.823 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.823 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.823 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.823 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.823 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.823 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.083 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.083 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.083 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.083 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.083 "name": "Existed_Raid", 00:14:53.083 "uuid": "68dc4750-746b-4d70-a27a-4a7dac72d754", 00:14:53.083 "strip_size_kb": 64, 00:14:53.083 "state": "configuring", 00:14:53.083 "raid_level": "raid5f", 00:14:53.083 "superblock": true, 00:14:53.083 "num_base_bdevs": 4, 00:14:53.083 "num_base_bdevs_discovered": 3, 
00:14:53.083 "num_base_bdevs_operational": 4, 00:14:53.083 "base_bdevs_list": [ 00:14:53.083 { 00:14:53.083 "name": "BaseBdev1", 00:14:53.083 "uuid": "be46089d-ee1c-45e3-bcbe-09344ee78695", 00:14:53.083 "is_configured": true, 00:14:53.083 "data_offset": 2048, 00:14:53.083 "data_size": 63488 00:14:53.083 }, 00:14:53.083 { 00:14:53.083 "name": "BaseBdev2", 00:14:53.083 "uuid": "d0728b86-f11d-4b8e-a1eb-0edb1a2913af", 00:14:53.083 "is_configured": true, 00:14:53.083 "data_offset": 2048, 00:14:53.083 "data_size": 63488 00:14:53.083 }, 00:14:53.083 { 00:14:53.083 "name": "BaseBdev3", 00:14:53.083 "uuid": "bb90c842-d6dd-4714-bd4f-8b434342f6b6", 00:14:53.083 "is_configured": true, 00:14:53.083 "data_offset": 2048, 00:14:53.083 "data_size": 63488 00:14:53.083 }, 00:14:53.083 { 00:14:53.083 "name": "BaseBdev4", 00:14:53.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.083 "is_configured": false, 00:14:53.083 "data_offset": 0, 00:14:53.083 "data_size": 0 00:14:53.083 } 00:14:53.083 ] 00:14:53.083 }' 00:14:53.083 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.083 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.344 [2024-11-18 03:14:56.841030] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:53.344 [2024-11-18 03:14:56.841335] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:53.344 [2024-11-18 03:14:56.841394] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:53.344 [2024-11-18 
03:14:56.841659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:53.344 BaseBdev4 00:14:53.344 [2024-11-18 03:14:56.842145] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:53.344 [2024-11-18 03:14:56.842166] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:53.344 [2024-11-18 03:14:56.842281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:53.344 03:14:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.344 [ 00:14:53.344 { 00:14:53.344 "name": "BaseBdev4", 00:14:53.344 "aliases": [ 00:14:53.344 "f3e67b4f-b9ca-4b84-8b8e-d8a07f2b980d" 00:14:53.344 ], 00:14:53.344 "product_name": "Malloc disk", 00:14:53.344 "block_size": 512, 00:14:53.344 "num_blocks": 65536, 00:14:53.344 "uuid": "f3e67b4f-b9ca-4b84-8b8e-d8a07f2b980d", 00:14:53.344 "assigned_rate_limits": { 00:14:53.344 "rw_ios_per_sec": 0, 00:14:53.344 "rw_mbytes_per_sec": 0, 00:14:53.344 "r_mbytes_per_sec": 0, 00:14:53.344 "w_mbytes_per_sec": 0 00:14:53.344 }, 00:14:53.344 "claimed": true, 00:14:53.344 "claim_type": "exclusive_write", 00:14:53.344 "zoned": false, 00:14:53.344 "supported_io_types": { 00:14:53.344 "read": true, 00:14:53.344 "write": true, 00:14:53.344 "unmap": true, 00:14:53.344 "flush": true, 00:14:53.344 "reset": true, 00:14:53.344 "nvme_admin": false, 00:14:53.344 "nvme_io": false, 00:14:53.344 "nvme_io_md": false, 00:14:53.344 "write_zeroes": true, 00:14:53.344 "zcopy": true, 00:14:53.344 "get_zone_info": false, 00:14:53.344 "zone_management": false, 00:14:53.344 "zone_append": false, 00:14:53.344 "compare": false, 00:14:53.344 "compare_and_write": false, 00:14:53.344 "abort": true, 00:14:53.344 "seek_hole": false, 00:14:53.344 "seek_data": false, 00:14:53.344 "copy": true, 00:14:53.344 "nvme_iov_md": false 00:14:53.344 }, 00:14:53.344 "memory_domains": [ 00:14:53.344 { 00:14:53.344 "dma_device_id": "system", 00:14:53.344 "dma_device_type": 1 00:14:53.344 }, 00:14:53.344 { 00:14:53.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.344 "dma_device_type": 2 00:14:53.344 } 00:14:53.344 ], 00:14:53.344 "driver_specific": {} 00:14:53.344 } 00:14:53.344 ] 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.344 03:14:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:53.344 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.604 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.605 "name": "Existed_Raid", 00:14:53.605 "uuid": "68dc4750-746b-4d70-a27a-4a7dac72d754", 00:14:53.605 "strip_size_kb": 64, 00:14:53.605 "state": "online", 00:14:53.605 "raid_level": "raid5f", 00:14:53.605 "superblock": true, 00:14:53.605 "num_base_bdevs": 4, 00:14:53.605 "num_base_bdevs_discovered": 4, 00:14:53.605 "num_base_bdevs_operational": 4, 00:14:53.605 "base_bdevs_list": [ 00:14:53.605 { 00:14:53.605 "name": "BaseBdev1", 00:14:53.605 "uuid": "be46089d-ee1c-45e3-bcbe-09344ee78695", 00:14:53.605 "is_configured": true, 00:14:53.605 "data_offset": 2048, 00:14:53.605 "data_size": 63488 00:14:53.605 }, 00:14:53.605 { 00:14:53.605 "name": "BaseBdev2", 00:14:53.605 "uuid": "d0728b86-f11d-4b8e-a1eb-0edb1a2913af", 00:14:53.605 "is_configured": true, 00:14:53.605 "data_offset": 2048, 00:14:53.605 "data_size": 63488 00:14:53.605 }, 00:14:53.605 { 00:14:53.605 "name": "BaseBdev3", 00:14:53.605 "uuid": "bb90c842-d6dd-4714-bd4f-8b434342f6b6", 00:14:53.605 "is_configured": true, 00:14:53.605 "data_offset": 2048, 00:14:53.605 "data_size": 63488 00:14:53.605 }, 00:14:53.605 { 00:14:53.605 "name": "BaseBdev4", 00:14:53.605 "uuid": "f3e67b4f-b9ca-4b84-8b8e-d8a07f2b980d", 00:14:53.605 "is_configured": true, 00:14:53.605 "data_offset": 2048, 00:14:53.605 "data_size": 63488 00:14:53.605 } 00:14:53.605 ] 00:14:53.605 }' 00:14:53.605 03:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.605 03:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.864 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:53.864 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:14:53.864 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:53.865 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:53.865 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:53.865 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:53.865 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:53.865 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:53.865 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.865 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.865 [2024-11-18 03:14:57.332464] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.865 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.865 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:53.865 "name": "Existed_Raid", 00:14:53.865 "aliases": [ 00:14:53.865 "68dc4750-746b-4d70-a27a-4a7dac72d754" 00:14:53.865 ], 00:14:53.865 "product_name": "Raid Volume", 00:14:53.865 "block_size": 512, 00:14:53.865 "num_blocks": 190464, 00:14:53.865 "uuid": "68dc4750-746b-4d70-a27a-4a7dac72d754", 00:14:53.865 "assigned_rate_limits": { 00:14:53.865 "rw_ios_per_sec": 0, 00:14:53.865 "rw_mbytes_per_sec": 0, 00:14:53.865 "r_mbytes_per_sec": 0, 00:14:53.865 "w_mbytes_per_sec": 0 00:14:53.865 }, 00:14:53.865 "claimed": false, 00:14:53.865 "zoned": false, 00:14:53.865 "supported_io_types": { 00:14:53.865 "read": true, 00:14:53.865 "write": true, 00:14:53.865 "unmap": false, 00:14:53.865 "flush": false, 
00:14:53.865 "reset": true, 00:14:53.865 "nvme_admin": false, 00:14:53.865 "nvme_io": false, 00:14:53.865 "nvme_io_md": false, 00:14:53.865 "write_zeroes": true, 00:14:53.865 "zcopy": false, 00:14:53.865 "get_zone_info": false, 00:14:53.865 "zone_management": false, 00:14:53.865 "zone_append": false, 00:14:53.865 "compare": false, 00:14:53.865 "compare_and_write": false, 00:14:53.865 "abort": false, 00:14:53.865 "seek_hole": false, 00:14:53.865 "seek_data": false, 00:14:53.865 "copy": false, 00:14:53.865 "nvme_iov_md": false 00:14:53.865 }, 00:14:53.865 "driver_specific": { 00:14:53.865 "raid": { 00:14:53.865 "uuid": "68dc4750-746b-4d70-a27a-4a7dac72d754", 00:14:53.865 "strip_size_kb": 64, 00:14:53.865 "state": "online", 00:14:53.865 "raid_level": "raid5f", 00:14:53.865 "superblock": true, 00:14:53.865 "num_base_bdevs": 4, 00:14:53.865 "num_base_bdevs_discovered": 4, 00:14:53.865 "num_base_bdevs_operational": 4, 00:14:53.865 "base_bdevs_list": [ 00:14:53.865 { 00:14:53.865 "name": "BaseBdev1", 00:14:53.865 "uuid": "be46089d-ee1c-45e3-bcbe-09344ee78695", 00:14:53.865 "is_configured": true, 00:14:53.865 "data_offset": 2048, 00:14:53.865 "data_size": 63488 00:14:53.865 }, 00:14:53.865 { 00:14:53.865 "name": "BaseBdev2", 00:14:53.865 "uuid": "d0728b86-f11d-4b8e-a1eb-0edb1a2913af", 00:14:53.865 "is_configured": true, 00:14:53.865 "data_offset": 2048, 00:14:53.865 "data_size": 63488 00:14:53.865 }, 00:14:53.865 { 00:14:53.865 "name": "BaseBdev3", 00:14:53.865 "uuid": "bb90c842-d6dd-4714-bd4f-8b434342f6b6", 00:14:53.865 "is_configured": true, 00:14:53.865 "data_offset": 2048, 00:14:53.865 "data_size": 63488 00:14:53.865 }, 00:14:53.865 { 00:14:53.865 "name": "BaseBdev4", 00:14:53.865 "uuid": "f3e67b4f-b9ca-4b84-8b8e-d8a07f2b980d", 00:14:53.865 "is_configured": true, 00:14:53.865 "data_offset": 2048, 00:14:53.865 "data_size": 63488 00:14:53.865 } 00:14:53.865 ] 00:14:53.865 } 00:14:53.865 } 00:14:53.865 }' 00:14:53.865 03:14:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.865 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:53.865 BaseBdev2 00:14:53.865 BaseBdev3 00:14:53.865 BaseBdev4' 00:14:53.865 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:54.124 03:14:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.124 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.125 [2024-11-18 03:14:57.647720] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.125 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.384 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.384 "name": "Existed_Raid", 00:14:54.384 "uuid": "68dc4750-746b-4d70-a27a-4a7dac72d754", 00:14:54.384 "strip_size_kb": 64, 00:14:54.384 "state": "online", 00:14:54.384 "raid_level": "raid5f", 00:14:54.384 "superblock": true, 00:14:54.384 "num_base_bdevs": 4, 00:14:54.384 "num_base_bdevs_discovered": 3, 00:14:54.384 "num_base_bdevs_operational": 3, 00:14:54.384 "base_bdevs_list": [ 00:14:54.384 { 00:14:54.384 "name": null, 00:14:54.384 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:54.384 "is_configured": false, 00:14:54.384 "data_offset": 0, 00:14:54.384 "data_size": 63488 00:14:54.384 }, 00:14:54.384 { 00:14:54.384 "name": "BaseBdev2", 00:14:54.384 "uuid": "d0728b86-f11d-4b8e-a1eb-0edb1a2913af", 00:14:54.384 "is_configured": true, 00:14:54.384 "data_offset": 2048, 00:14:54.384 "data_size": 63488 00:14:54.384 }, 00:14:54.384 { 00:14:54.384 "name": "BaseBdev3", 00:14:54.384 "uuid": "bb90c842-d6dd-4714-bd4f-8b434342f6b6", 00:14:54.384 "is_configured": true, 00:14:54.384 "data_offset": 2048, 00:14:54.384 "data_size": 63488 00:14:54.384 }, 00:14:54.384 { 00:14:54.384 "name": "BaseBdev4", 00:14:54.384 "uuid": "f3e67b4f-b9ca-4b84-8b8e-d8a07f2b980d", 00:14:54.384 "is_configured": true, 00:14:54.384 "data_offset": 2048, 00:14:54.384 "data_size": 63488 00:14:54.384 } 00:14:54.384 ] 00:14:54.384 }' 00:14:54.384 03:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.384 03:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.644 [2024-11-18 03:14:58.142276] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:54.644 [2024-11-18 03:14:58.142474] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.644 [2024-11-18 03:14:58.153669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:54.644 
03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.644 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.644 [2024-11-18 03:14:58.213606] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.903 [2024-11-18 03:14:58.284798] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:54.903 [2024-11-18 03:14:58.284887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:54.903 BaseBdev2 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.903 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.903 [ 00:14:54.903 { 00:14:54.903 "name": "BaseBdev2", 00:14:54.903 "aliases": [ 00:14:54.903 "8c91555d-5f06-4ea8-9de6-ca3fbfbbe785" 00:14:54.903 ], 00:14:54.903 "product_name": "Malloc disk", 00:14:54.903 "block_size": 512, 00:14:54.903 "num_blocks": 65536, 00:14:54.903 "uuid": 
"8c91555d-5f06-4ea8-9de6-ca3fbfbbe785", 00:14:54.903 "assigned_rate_limits": { 00:14:54.903 "rw_ios_per_sec": 0, 00:14:54.903 "rw_mbytes_per_sec": 0, 00:14:54.903 "r_mbytes_per_sec": 0, 00:14:54.903 "w_mbytes_per_sec": 0 00:14:54.903 }, 00:14:54.903 "claimed": false, 00:14:54.903 "zoned": false, 00:14:54.903 "supported_io_types": { 00:14:54.903 "read": true, 00:14:54.903 "write": true, 00:14:54.903 "unmap": true, 00:14:54.903 "flush": true, 00:14:54.903 "reset": true, 00:14:54.903 "nvme_admin": false, 00:14:54.903 "nvme_io": false, 00:14:54.903 "nvme_io_md": false, 00:14:54.903 "write_zeroes": true, 00:14:54.903 "zcopy": true, 00:14:54.903 "get_zone_info": false, 00:14:54.903 "zone_management": false, 00:14:54.904 "zone_append": false, 00:14:54.904 "compare": false, 00:14:54.904 "compare_and_write": false, 00:14:54.904 "abort": true, 00:14:54.904 "seek_hole": false, 00:14:54.904 "seek_data": false, 00:14:54.904 "copy": true, 00:14:54.904 "nvme_iov_md": false 00:14:54.904 }, 00:14:54.904 "memory_domains": [ 00:14:54.904 { 00:14:54.904 "dma_device_id": "system", 00:14:54.904 "dma_device_type": 1 00:14:54.904 }, 00:14:54.904 { 00:14:54.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.904 "dma_device_type": 2 00:14:54.904 } 00:14:54.904 ], 00:14:54.904 "driver_specific": {} 00:14:54.904 } 00:14:54.904 ] 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.904 BaseBdev3 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.904 [ 00:14:54.904 { 00:14:54.904 "name": "BaseBdev3", 00:14:54.904 "aliases": [ 00:14:54.904 "c635ec08-8154-4879-af30-8395f1e7fd1c" 00:14:54.904 ], 00:14:54.904 
"product_name": "Malloc disk", 00:14:54.904 "block_size": 512, 00:14:54.904 "num_blocks": 65536, 00:14:54.904 "uuid": "c635ec08-8154-4879-af30-8395f1e7fd1c", 00:14:54.904 "assigned_rate_limits": { 00:14:54.904 "rw_ios_per_sec": 0, 00:14:54.904 "rw_mbytes_per_sec": 0, 00:14:54.904 "r_mbytes_per_sec": 0, 00:14:54.904 "w_mbytes_per_sec": 0 00:14:54.904 }, 00:14:54.904 "claimed": false, 00:14:54.904 "zoned": false, 00:14:54.904 "supported_io_types": { 00:14:54.904 "read": true, 00:14:54.904 "write": true, 00:14:54.904 "unmap": true, 00:14:54.904 "flush": true, 00:14:54.904 "reset": true, 00:14:54.904 "nvme_admin": false, 00:14:54.904 "nvme_io": false, 00:14:54.904 "nvme_io_md": false, 00:14:54.904 "write_zeroes": true, 00:14:54.904 "zcopy": true, 00:14:54.904 "get_zone_info": false, 00:14:54.904 "zone_management": false, 00:14:54.904 "zone_append": false, 00:14:54.904 "compare": false, 00:14:54.904 "compare_and_write": false, 00:14:54.904 "abort": true, 00:14:54.904 "seek_hole": false, 00:14:54.904 "seek_data": false, 00:14:54.904 "copy": true, 00:14:54.904 "nvme_iov_md": false 00:14:54.904 }, 00:14:54.904 "memory_domains": [ 00:14:54.904 { 00:14:54.904 "dma_device_id": "system", 00:14:54.904 "dma_device_type": 1 00:14:54.904 }, 00:14:54.904 { 00:14:54.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.904 "dma_device_type": 2 00:14:54.904 } 00:14:54.904 ], 00:14:54.904 "driver_specific": {} 00:14:54.904 } 00:14:54.904 ] 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.904 BaseBdev4 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.904 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.904 [ 00:14:54.904 { 00:14:54.904 "name": "BaseBdev4", 00:14:54.904 
"aliases": [ 00:14:54.904 "3c6940e9-6512-40e8-812a-1eba080e546b" 00:14:54.904 ], 00:14:54.904 "product_name": "Malloc disk", 00:14:54.904 "block_size": 512, 00:14:54.904 "num_blocks": 65536, 00:14:54.904 "uuid": "3c6940e9-6512-40e8-812a-1eba080e546b", 00:14:54.904 "assigned_rate_limits": { 00:14:54.904 "rw_ios_per_sec": 0, 00:14:54.904 "rw_mbytes_per_sec": 0, 00:14:54.904 "r_mbytes_per_sec": 0, 00:14:54.904 "w_mbytes_per_sec": 0 00:14:54.904 }, 00:14:54.904 "claimed": false, 00:14:54.904 "zoned": false, 00:14:54.904 "supported_io_types": { 00:14:54.904 "read": true, 00:14:54.904 "write": true, 00:14:54.904 "unmap": true, 00:14:54.904 "flush": true, 00:14:54.904 "reset": true, 00:14:54.904 "nvme_admin": false, 00:14:54.904 "nvme_io": false, 00:14:54.904 "nvme_io_md": false, 00:14:54.904 "write_zeroes": true, 00:14:54.904 "zcopy": true, 00:14:54.904 "get_zone_info": false, 00:14:54.904 "zone_management": false, 00:14:54.904 "zone_append": false, 00:14:54.904 "compare": false, 00:14:54.904 "compare_and_write": false, 00:14:54.904 "abort": true, 00:14:54.904 "seek_hole": false, 00:14:54.904 "seek_data": false, 00:14:54.904 "copy": true, 00:14:54.904 "nvme_iov_md": false 00:14:54.904 }, 00:14:54.904 "memory_domains": [ 00:14:54.904 { 00:14:54.905 "dma_device_id": "system", 00:14:54.905 "dma_device_type": 1 00:14:54.905 }, 00:14:54.905 { 00:14:54.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.905 "dma_device_type": 2 00:14:54.905 } 00:14:54.905 ], 00:14:54.905 "driver_specific": {} 00:14:54.905 } 00:14:54.905 ] 00:14:54.905 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.905 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:54.905 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:54.905 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:54.905 
03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:54.905 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.905 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.164 [2024-11-18 03:14:58.477633] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:55.164 [2024-11-18 03:14:58.477721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:55.164 [2024-11-18 03:14:58.477761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.164 [2024-11-18 03:14:58.479582] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.164 [2024-11-18 03:14:58.479668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:55.164 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.164 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:55.164 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.164 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.164 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.164 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.164 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.164 03:14:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.164 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.164 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.164 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.164 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.164 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.164 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.164 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.164 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.164 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.164 "name": "Existed_Raid", 00:14:55.164 "uuid": "54024285-7084-40d5-84ca-3b3ba7cecb8f", 00:14:55.164 "strip_size_kb": 64, 00:14:55.164 "state": "configuring", 00:14:55.164 "raid_level": "raid5f", 00:14:55.164 "superblock": true, 00:14:55.164 "num_base_bdevs": 4, 00:14:55.164 "num_base_bdevs_discovered": 3, 00:14:55.164 "num_base_bdevs_operational": 4, 00:14:55.164 "base_bdevs_list": [ 00:14:55.164 { 00:14:55.164 "name": "BaseBdev1", 00:14:55.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.164 "is_configured": false, 00:14:55.164 "data_offset": 0, 00:14:55.164 "data_size": 0 00:14:55.164 }, 00:14:55.164 { 00:14:55.164 "name": "BaseBdev2", 00:14:55.164 "uuid": "8c91555d-5f06-4ea8-9de6-ca3fbfbbe785", 00:14:55.164 "is_configured": true, 00:14:55.164 "data_offset": 2048, 00:14:55.164 "data_size": 63488 00:14:55.164 }, 00:14:55.164 { 00:14:55.165 "name": "BaseBdev3", 
00:14:55.165 "uuid": "c635ec08-8154-4879-af30-8395f1e7fd1c", 00:14:55.165 "is_configured": true, 00:14:55.165 "data_offset": 2048, 00:14:55.165 "data_size": 63488 00:14:55.165 }, 00:14:55.165 { 00:14:55.165 "name": "BaseBdev4", 00:14:55.165 "uuid": "3c6940e9-6512-40e8-812a-1eba080e546b", 00:14:55.165 "is_configured": true, 00:14:55.165 "data_offset": 2048, 00:14:55.165 "data_size": 63488 00:14:55.165 } 00:14:55.165 ] 00:14:55.165 }' 00:14:55.165 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.165 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.424 [2024-11-18 03:14:58.932855] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.424 
03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.424 "name": "Existed_Raid", 00:14:55.424 "uuid": "54024285-7084-40d5-84ca-3b3ba7cecb8f", 00:14:55.424 "strip_size_kb": 64, 00:14:55.424 "state": "configuring", 00:14:55.424 "raid_level": "raid5f", 00:14:55.424 "superblock": true, 00:14:55.424 "num_base_bdevs": 4, 00:14:55.424 "num_base_bdevs_discovered": 2, 00:14:55.424 "num_base_bdevs_operational": 4, 00:14:55.424 "base_bdevs_list": [ 00:14:55.424 { 00:14:55.424 "name": "BaseBdev1", 00:14:55.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.424 "is_configured": false, 00:14:55.424 "data_offset": 0, 00:14:55.424 "data_size": 0 00:14:55.424 }, 00:14:55.424 { 00:14:55.424 "name": null, 00:14:55.424 "uuid": "8c91555d-5f06-4ea8-9de6-ca3fbfbbe785", 00:14:55.424 "is_configured": false, 00:14:55.424 "data_offset": 0, 00:14:55.424 "data_size": 63488 00:14:55.424 }, 00:14:55.424 { 
00:14:55.424 "name": "BaseBdev3", 00:14:55.424 "uuid": "c635ec08-8154-4879-af30-8395f1e7fd1c", 00:14:55.424 "is_configured": true, 00:14:55.424 "data_offset": 2048, 00:14:55.424 "data_size": 63488 00:14:55.424 }, 00:14:55.424 { 00:14:55.424 "name": "BaseBdev4", 00:14:55.424 "uuid": "3c6940e9-6512-40e8-812a-1eba080e546b", 00:14:55.424 "is_configured": true, 00:14:55.424 "data_offset": 2048, 00:14:55.424 "data_size": 63488 00:14:55.424 } 00:14:55.424 ] 00:14:55.424 }' 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.424 03:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.993 [2024-11-18 03:14:59.434954] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.993 BaseBdev1 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.993 [ 00:14:55.993 { 00:14:55.993 "name": "BaseBdev1", 00:14:55.993 "aliases": [ 00:14:55.993 "eff23d51-f4bd-48c1-818d-f022c3008dbd" 00:14:55.993 ], 00:14:55.993 "product_name": "Malloc disk", 00:14:55.993 "block_size": 512, 00:14:55.993 "num_blocks": 65536, 00:14:55.993 "uuid": "eff23d51-f4bd-48c1-818d-f022c3008dbd", 00:14:55.993 "assigned_rate_limits": { 00:14:55.993 "rw_ios_per_sec": 0, 00:14:55.993 "rw_mbytes_per_sec": 0, 00:14:55.993 
"r_mbytes_per_sec": 0, 00:14:55.993 "w_mbytes_per_sec": 0 00:14:55.993 }, 00:14:55.993 "claimed": true, 00:14:55.993 "claim_type": "exclusive_write", 00:14:55.993 "zoned": false, 00:14:55.993 "supported_io_types": { 00:14:55.993 "read": true, 00:14:55.993 "write": true, 00:14:55.993 "unmap": true, 00:14:55.993 "flush": true, 00:14:55.993 "reset": true, 00:14:55.993 "nvme_admin": false, 00:14:55.993 "nvme_io": false, 00:14:55.993 "nvme_io_md": false, 00:14:55.993 "write_zeroes": true, 00:14:55.993 "zcopy": true, 00:14:55.993 "get_zone_info": false, 00:14:55.993 "zone_management": false, 00:14:55.993 "zone_append": false, 00:14:55.993 "compare": false, 00:14:55.993 "compare_and_write": false, 00:14:55.993 "abort": true, 00:14:55.993 "seek_hole": false, 00:14:55.993 "seek_data": false, 00:14:55.993 "copy": true, 00:14:55.993 "nvme_iov_md": false 00:14:55.993 }, 00:14:55.993 "memory_domains": [ 00:14:55.993 { 00:14:55.993 "dma_device_id": "system", 00:14:55.993 "dma_device_type": 1 00:14:55.993 }, 00:14:55.993 { 00:14:55.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.993 "dma_device_type": 2 00:14:55.993 } 00:14:55.993 ], 00:14:55.993 "driver_specific": {} 00:14:55.993 } 00:14:55.993 ] 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.993 03:14:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.993 "name": "Existed_Raid", 00:14:55.993 "uuid": "54024285-7084-40d5-84ca-3b3ba7cecb8f", 00:14:55.993 "strip_size_kb": 64, 00:14:55.993 "state": "configuring", 00:14:55.993 "raid_level": "raid5f", 00:14:55.993 "superblock": true, 00:14:55.993 "num_base_bdevs": 4, 00:14:55.993 "num_base_bdevs_discovered": 3, 00:14:55.993 "num_base_bdevs_operational": 4, 00:14:55.993 "base_bdevs_list": [ 00:14:55.993 { 00:14:55.993 "name": "BaseBdev1", 00:14:55.993 "uuid": "eff23d51-f4bd-48c1-818d-f022c3008dbd", 00:14:55.993 "is_configured": true, 00:14:55.993 "data_offset": 2048, 00:14:55.993 "data_size": 63488 00:14:55.993 
}, 00:14:55.993 { 00:14:55.993 "name": null, 00:14:55.993 "uuid": "8c91555d-5f06-4ea8-9de6-ca3fbfbbe785", 00:14:55.993 "is_configured": false, 00:14:55.993 "data_offset": 0, 00:14:55.993 "data_size": 63488 00:14:55.993 }, 00:14:55.993 { 00:14:55.993 "name": "BaseBdev3", 00:14:55.993 "uuid": "c635ec08-8154-4879-af30-8395f1e7fd1c", 00:14:55.993 "is_configured": true, 00:14:55.993 "data_offset": 2048, 00:14:55.993 "data_size": 63488 00:14:55.993 }, 00:14:55.993 { 00:14:55.993 "name": "BaseBdev4", 00:14:55.993 "uuid": "3c6940e9-6512-40e8-812a-1eba080e546b", 00:14:55.993 "is_configured": true, 00:14:55.993 "data_offset": 2048, 00:14:55.993 "data_size": 63488 00:14:55.993 } 00:14:55.993 ] 00:14:55.993 }' 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.993 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.563 
[2024-11-18 03:14:59.950141] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.563 "name": "Existed_Raid", 00:14:56.563 "uuid": "54024285-7084-40d5-84ca-3b3ba7cecb8f", 00:14:56.563 "strip_size_kb": 64, 00:14:56.563 "state": "configuring", 00:14:56.563 "raid_level": "raid5f", 00:14:56.563 "superblock": true, 00:14:56.563 "num_base_bdevs": 4, 00:14:56.563 "num_base_bdevs_discovered": 2, 00:14:56.563 "num_base_bdevs_operational": 4, 00:14:56.563 "base_bdevs_list": [ 00:14:56.563 { 00:14:56.563 "name": "BaseBdev1", 00:14:56.563 "uuid": "eff23d51-f4bd-48c1-818d-f022c3008dbd", 00:14:56.563 "is_configured": true, 00:14:56.563 "data_offset": 2048, 00:14:56.563 "data_size": 63488 00:14:56.563 }, 00:14:56.563 { 00:14:56.563 "name": null, 00:14:56.563 "uuid": "8c91555d-5f06-4ea8-9de6-ca3fbfbbe785", 00:14:56.563 "is_configured": false, 00:14:56.563 "data_offset": 0, 00:14:56.563 "data_size": 63488 00:14:56.563 }, 00:14:56.563 { 00:14:56.563 "name": null, 00:14:56.563 "uuid": "c635ec08-8154-4879-af30-8395f1e7fd1c", 00:14:56.563 "is_configured": false, 00:14:56.563 "data_offset": 0, 00:14:56.563 "data_size": 63488 00:14:56.563 }, 00:14:56.563 { 00:14:56.563 "name": "BaseBdev4", 00:14:56.563 "uuid": "3c6940e9-6512-40e8-812a-1eba080e546b", 00:14:56.563 "is_configured": true, 00:14:56.563 "data_offset": 2048, 00:14:56.563 "data_size": 63488 00:14:56.563 } 00:14:56.563 ] 00:14:56.563 }' 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.563 03:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.824 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.824 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:56.824 03:15:00 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.824 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.824 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.824 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:56.824 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:56.824 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.824 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.085 [2024-11-18 03:15:00.401472] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.085 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.085 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:57.085 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.085 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.085 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.085 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.085 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.085 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.085 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.085 03:15:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.085 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.085 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.085 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.085 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.085 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.085 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.085 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.085 "name": "Existed_Raid", 00:14:57.085 "uuid": "54024285-7084-40d5-84ca-3b3ba7cecb8f", 00:14:57.085 "strip_size_kb": 64, 00:14:57.085 "state": "configuring", 00:14:57.085 "raid_level": "raid5f", 00:14:57.085 "superblock": true, 00:14:57.085 "num_base_bdevs": 4, 00:14:57.085 "num_base_bdevs_discovered": 3, 00:14:57.085 "num_base_bdevs_operational": 4, 00:14:57.085 "base_bdevs_list": [ 00:14:57.085 { 00:14:57.085 "name": "BaseBdev1", 00:14:57.085 "uuid": "eff23d51-f4bd-48c1-818d-f022c3008dbd", 00:14:57.085 "is_configured": true, 00:14:57.085 "data_offset": 2048, 00:14:57.085 "data_size": 63488 00:14:57.085 }, 00:14:57.085 { 00:14:57.085 "name": null, 00:14:57.085 "uuid": "8c91555d-5f06-4ea8-9de6-ca3fbfbbe785", 00:14:57.085 "is_configured": false, 00:14:57.085 "data_offset": 0, 00:14:57.085 "data_size": 63488 00:14:57.085 }, 00:14:57.085 { 00:14:57.085 "name": "BaseBdev3", 00:14:57.085 "uuid": "c635ec08-8154-4879-af30-8395f1e7fd1c", 00:14:57.085 "is_configured": true, 00:14:57.085 "data_offset": 2048, 00:14:57.085 "data_size": 63488 00:14:57.085 }, 00:14:57.085 { 
00:14:57.085 "name": "BaseBdev4", 00:14:57.085 "uuid": "3c6940e9-6512-40e8-812a-1eba080e546b", 00:14:57.085 "is_configured": true, 00:14:57.085 "data_offset": 2048, 00:14:57.085 "data_size": 63488 00:14:57.085 } 00:14:57.085 ] 00:14:57.085 }' 00:14:57.085 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.085 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.345 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.345 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:57.345 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.345 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.345 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.346 [2024-11-18 03:15:00.860678] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.346 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.606 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.606 "name": "Existed_Raid", 00:14:57.606 "uuid": "54024285-7084-40d5-84ca-3b3ba7cecb8f", 00:14:57.606 "strip_size_kb": 64, 00:14:57.606 "state": "configuring", 00:14:57.606 "raid_level": "raid5f", 00:14:57.606 "superblock": true, 00:14:57.606 "num_base_bdevs": 4, 00:14:57.606 "num_base_bdevs_discovered": 2, 00:14:57.606 
"num_base_bdevs_operational": 4, 00:14:57.606 "base_bdevs_list": [ 00:14:57.606 { 00:14:57.606 "name": null, 00:14:57.606 "uuid": "eff23d51-f4bd-48c1-818d-f022c3008dbd", 00:14:57.606 "is_configured": false, 00:14:57.606 "data_offset": 0, 00:14:57.606 "data_size": 63488 00:14:57.606 }, 00:14:57.606 { 00:14:57.606 "name": null, 00:14:57.606 "uuid": "8c91555d-5f06-4ea8-9de6-ca3fbfbbe785", 00:14:57.606 "is_configured": false, 00:14:57.606 "data_offset": 0, 00:14:57.606 "data_size": 63488 00:14:57.606 }, 00:14:57.606 { 00:14:57.606 "name": "BaseBdev3", 00:14:57.606 "uuid": "c635ec08-8154-4879-af30-8395f1e7fd1c", 00:14:57.606 "is_configured": true, 00:14:57.606 "data_offset": 2048, 00:14:57.606 "data_size": 63488 00:14:57.606 }, 00:14:57.606 { 00:14:57.606 "name": "BaseBdev4", 00:14:57.606 "uuid": "3c6940e9-6512-40e8-812a-1eba080e546b", 00:14:57.607 "is_configured": true, 00:14:57.607 "data_offset": 2048, 00:14:57.607 "data_size": 63488 00:14:57.607 } 00:14:57.607 ] 00:14:57.607 }' 00:14:57.607 03:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.607 03:15:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.867 [2024-11-18 03:15:01.326389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.867 "name": "Existed_Raid", 00:14:57.867 "uuid": "54024285-7084-40d5-84ca-3b3ba7cecb8f", 00:14:57.867 "strip_size_kb": 64, 00:14:57.867 "state": "configuring", 00:14:57.867 "raid_level": "raid5f", 00:14:57.867 "superblock": true, 00:14:57.867 "num_base_bdevs": 4, 00:14:57.867 "num_base_bdevs_discovered": 3, 00:14:57.867 "num_base_bdevs_operational": 4, 00:14:57.867 "base_bdevs_list": [ 00:14:57.867 { 00:14:57.867 "name": null, 00:14:57.867 "uuid": "eff23d51-f4bd-48c1-818d-f022c3008dbd", 00:14:57.867 "is_configured": false, 00:14:57.867 "data_offset": 0, 00:14:57.867 "data_size": 63488 00:14:57.867 }, 00:14:57.867 { 00:14:57.867 "name": "BaseBdev2", 00:14:57.867 "uuid": "8c91555d-5f06-4ea8-9de6-ca3fbfbbe785", 00:14:57.867 "is_configured": true, 00:14:57.867 "data_offset": 2048, 00:14:57.867 "data_size": 63488 00:14:57.867 }, 00:14:57.867 { 00:14:57.867 "name": "BaseBdev3", 00:14:57.867 "uuid": "c635ec08-8154-4879-af30-8395f1e7fd1c", 00:14:57.867 "is_configured": true, 00:14:57.867 "data_offset": 2048, 00:14:57.867 "data_size": 63488 00:14:57.867 }, 00:14:57.867 { 00:14:57.867 "name": "BaseBdev4", 00:14:57.867 "uuid": "3c6940e9-6512-40e8-812a-1eba080e546b", 00:14:57.867 "is_configured": true, 00:14:57.867 "data_offset": 2048, 00:14:57.867 "data_size": 63488 00:14:57.867 } 00:14:57.867 ] 00:14:57.867 }' 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.867 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u eff23d51-f4bd-48c1-818d-f022c3008dbd 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.437 [2024-11-18 03:15:01.848445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:58.437 [2024-11-18 03:15:01.848715] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:58.437 [2024-11-18 
03:15:01.848750] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:58.437 [2024-11-18 03:15:01.849037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:58.437 NewBaseBdev 00:14:58.437 [2024-11-18 03:15:01.849504] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:58.437 [2024-11-18 03:15:01.849524] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:58.437 [2024-11-18 03:15:01.849619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.437 [ 00:14:58.437 { 00:14:58.437 "name": "NewBaseBdev", 00:14:58.437 "aliases": [ 00:14:58.437 "eff23d51-f4bd-48c1-818d-f022c3008dbd" 00:14:58.437 ], 00:14:58.437 "product_name": "Malloc disk", 00:14:58.437 "block_size": 512, 00:14:58.437 "num_blocks": 65536, 00:14:58.437 "uuid": "eff23d51-f4bd-48c1-818d-f022c3008dbd", 00:14:58.437 "assigned_rate_limits": { 00:14:58.437 "rw_ios_per_sec": 0, 00:14:58.437 "rw_mbytes_per_sec": 0, 00:14:58.437 "r_mbytes_per_sec": 0, 00:14:58.437 "w_mbytes_per_sec": 0 00:14:58.437 }, 00:14:58.437 "claimed": true, 00:14:58.437 "claim_type": "exclusive_write", 00:14:58.437 "zoned": false, 00:14:58.437 "supported_io_types": { 00:14:58.437 "read": true, 00:14:58.437 "write": true, 00:14:58.437 "unmap": true, 00:14:58.437 "flush": true, 00:14:58.437 "reset": true, 00:14:58.437 "nvme_admin": false, 00:14:58.437 "nvme_io": false, 00:14:58.437 "nvme_io_md": false, 00:14:58.437 "write_zeroes": true, 00:14:58.437 "zcopy": true, 00:14:58.437 "get_zone_info": false, 00:14:58.437 "zone_management": false, 00:14:58.437 "zone_append": false, 00:14:58.437 "compare": false, 00:14:58.437 "compare_and_write": false, 00:14:58.437 "abort": true, 00:14:58.437 "seek_hole": false, 00:14:58.437 "seek_data": false, 00:14:58.437 "copy": true, 00:14:58.437 "nvme_iov_md": false 00:14:58.437 }, 00:14:58.437 "memory_domains": [ 00:14:58.437 { 00:14:58.437 "dma_device_id": "system", 00:14:58.437 "dma_device_type": 1 00:14:58.437 }, 00:14:58.437 { 00:14:58.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.437 "dma_device_type": 2 00:14:58.437 } 00:14:58.437 ], 00:14:58.437 "driver_specific": {} 00:14:58.437 } 00:14:58.437 ] 00:14:58.437 03:15:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.437 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.438 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.438 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.438 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.438 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.438 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.438 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.438 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.438 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.438 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.438 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:58.438 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.438 "name": "Existed_Raid", 00:14:58.438 "uuid": "54024285-7084-40d5-84ca-3b3ba7cecb8f", 00:14:58.438 "strip_size_kb": 64, 00:14:58.438 "state": "online", 00:14:58.438 "raid_level": "raid5f", 00:14:58.438 "superblock": true, 00:14:58.438 "num_base_bdevs": 4, 00:14:58.438 "num_base_bdevs_discovered": 4, 00:14:58.438 "num_base_bdevs_operational": 4, 00:14:58.438 "base_bdevs_list": [ 00:14:58.438 { 00:14:58.438 "name": "NewBaseBdev", 00:14:58.438 "uuid": "eff23d51-f4bd-48c1-818d-f022c3008dbd", 00:14:58.438 "is_configured": true, 00:14:58.438 "data_offset": 2048, 00:14:58.438 "data_size": 63488 00:14:58.438 }, 00:14:58.438 { 00:14:58.438 "name": "BaseBdev2", 00:14:58.438 "uuid": "8c91555d-5f06-4ea8-9de6-ca3fbfbbe785", 00:14:58.438 "is_configured": true, 00:14:58.438 "data_offset": 2048, 00:14:58.438 "data_size": 63488 00:14:58.438 }, 00:14:58.438 { 00:14:58.438 "name": "BaseBdev3", 00:14:58.438 "uuid": "c635ec08-8154-4879-af30-8395f1e7fd1c", 00:14:58.438 "is_configured": true, 00:14:58.438 "data_offset": 2048, 00:14:58.438 "data_size": 63488 00:14:58.438 }, 00:14:58.438 { 00:14:58.438 "name": "BaseBdev4", 00:14:58.438 "uuid": "3c6940e9-6512-40e8-812a-1eba080e546b", 00:14:58.438 "is_configured": true, 00:14:58.438 "data_offset": 2048, 00:14:58.438 "data_size": 63488 00:14:58.438 } 00:14:58.438 ] 00:14:58.438 }' 00:14:58.438 03:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.438 03:15:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.010 [2024-11-18 03:15:02.288005] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:59.010 "name": "Existed_Raid", 00:14:59.010 "aliases": [ 00:14:59.010 "54024285-7084-40d5-84ca-3b3ba7cecb8f" 00:14:59.010 ], 00:14:59.010 "product_name": "Raid Volume", 00:14:59.010 "block_size": 512, 00:14:59.010 "num_blocks": 190464, 00:14:59.010 "uuid": "54024285-7084-40d5-84ca-3b3ba7cecb8f", 00:14:59.010 "assigned_rate_limits": { 00:14:59.010 "rw_ios_per_sec": 0, 00:14:59.010 "rw_mbytes_per_sec": 0, 00:14:59.010 "r_mbytes_per_sec": 0, 00:14:59.010 "w_mbytes_per_sec": 0 00:14:59.010 }, 00:14:59.010 "claimed": false, 00:14:59.010 "zoned": false, 00:14:59.010 "supported_io_types": { 00:14:59.010 "read": true, 00:14:59.010 "write": true, 00:14:59.010 "unmap": false, 00:14:59.010 "flush": false, 00:14:59.010 "reset": true, 00:14:59.010 "nvme_admin": false, 00:14:59.010 "nvme_io": false, 
00:14:59.010 "nvme_io_md": false, 00:14:59.010 "write_zeroes": true, 00:14:59.010 "zcopy": false, 00:14:59.010 "get_zone_info": false, 00:14:59.010 "zone_management": false, 00:14:59.010 "zone_append": false, 00:14:59.010 "compare": false, 00:14:59.010 "compare_and_write": false, 00:14:59.010 "abort": false, 00:14:59.010 "seek_hole": false, 00:14:59.010 "seek_data": false, 00:14:59.010 "copy": false, 00:14:59.010 "nvme_iov_md": false 00:14:59.010 }, 00:14:59.010 "driver_specific": { 00:14:59.010 "raid": { 00:14:59.010 "uuid": "54024285-7084-40d5-84ca-3b3ba7cecb8f", 00:14:59.010 "strip_size_kb": 64, 00:14:59.010 "state": "online", 00:14:59.010 "raid_level": "raid5f", 00:14:59.010 "superblock": true, 00:14:59.010 "num_base_bdevs": 4, 00:14:59.010 "num_base_bdevs_discovered": 4, 00:14:59.010 "num_base_bdevs_operational": 4, 00:14:59.010 "base_bdevs_list": [ 00:14:59.010 { 00:14:59.010 "name": "NewBaseBdev", 00:14:59.010 "uuid": "eff23d51-f4bd-48c1-818d-f022c3008dbd", 00:14:59.010 "is_configured": true, 00:14:59.010 "data_offset": 2048, 00:14:59.010 "data_size": 63488 00:14:59.010 }, 00:14:59.010 { 00:14:59.010 "name": "BaseBdev2", 00:14:59.010 "uuid": "8c91555d-5f06-4ea8-9de6-ca3fbfbbe785", 00:14:59.010 "is_configured": true, 00:14:59.010 "data_offset": 2048, 00:14:59.010 "data_size": 63488 00:14:59.010 }, 00:14:59.010 { 00:14:59.010 "name": "BaseBdev3", 00:14:59.010 "uuid": "c635ec08-8154-4879-af30-8395f1e7fd1c", 00:14:59.010 "is_configured": true, 00:14:59.010 "data_offset": 2048, 00:14:59.010 "data_size": 63488 00:14:59.010 }, 00:14:59.010 { 00:14:59.010 "name": "BaseBdev4", 00:14:59.010 "uuid": "3c6940e9-6512-40e8-812a-1eba080e546b", 00:14:59.010 "is_configured": true, 00:14:59.010 "data_offset": 2048, 00:14:59.010 "data_size": 63488 00:14:59.010 } 00:14:59.010 ] 00:14:59.010 } 00:14:59.010 } 00:14:59.010 }' 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:59.010 BaseBdev2 00:14:59.010 BaseBdev3 00:14:59.010 BaseBdev4' 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:59.010 03:15:02 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.011 [2024-11-18 03:15:02.559249] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:59.011 [2024-11-18 03:15:02.559320] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.011 [2024-11-18 03:15:02.559430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.011 [2024-11-18 03:15:02.559725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.011 [2024-11-18 03:15:02.559780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 94025 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 94025 ']' 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 94025 00:14:59.011 03:15:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:59.011 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94025 00:14:59.271 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:59.271 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:59.271 killing process with pid 94025 00:14:59.271 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94025' 00:14:59.271 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 94025 00:14:59.271 [2024-11-18 03:15:02.589386] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:59.271 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 94025 00:14:59.271 [2024-11-18 03:15:02.630687] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.531 ************************************ 00:14:59.532 END TEST raid5f_state_function_test_sb 00:14:59.532 ************************************ 00:14:59.532 03:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:59.532 00:14:59.532 real 0m9.313s 00:14:59.532 user 0m15.878s 00:14:59.532 sys 0m2.063s 00:14:59.532 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:59.532 03:15:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.532 03:15:02 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:14:59.532 03:15:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:59.532 
03:15:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:59.532 03:15:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:59.532 ************************************ 00:14:59.532 START TEST raid5f_superblock_test 00:14:59.532 ************************************ 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94668 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94668 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 94668 ']' 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:59.532 03:15:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.532 [2024-11-18 03:15:03.023050] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:59.532 [2024-11-18 03:15:03.023263] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94668 ] 00:14:59.792 [2024-11-18 03:15:03.184296] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.792 [2024-11-18 03:15:03.234408] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.792 [2024-11-18 03:15:03.276497] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.792 [2024-11-18 03:15:03.276611] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.362 malloc1 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.362 [2024-11-18 03:15:03.899016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:00.362 [2024-11-18 03:15:03.899152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.362 [2024-11-18 03:15:03.899195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:00.362 [2024-11-18 03:15:03.899245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.362 [2024-11-18 03:15:03.901409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.362 [2024-11-18 03:15:03.901447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:00.362 pt1 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.362 malloc2 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.362 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.622 [2024-11-18 03:15:03.938783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:00.622 [2024-11-18 03:15:03.938906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.622 [2024-11-18 03:15:03.938948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:00.622 [2024-11-18 03:15:03.939009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.622 [2024-11-18 03:15:03.941313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.622 [2024-11-18 03:15:03.941381] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:00.622 pt2 00:15:00.622 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.622 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:00.622 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.623 malloc3 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.623 [2024-11-18 03:15:03.971406] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:00.623 [2024-11-18 03:15:03.971509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.623 [2024-11-18 03:15:03.971561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:00.623 [2024-11-18 03:15:03.971592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.623 [2024-11-18 03:15:03.973727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.623 [2024-11-18 03:15:03.973798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:00.623 pt3 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.623 03:15:03 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.623 malloc4 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.623 03:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.623 [2024-11-18 03:15:04.004013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:00.623 [2024-11-18 03:15:04.004110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.623 [2024-11-18 03:15:04.004144] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:00.623 [2024-11-18 03:15:04.004175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.623 [2024-11-18 03:15:04.006228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.623 [2024-11-18 03:15:04.006299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:00.623 pt4 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:00.623 [2024-11-18 03:15:04.016074] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:00.623 [2024-11-18 03:15:04.017875] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:00.623 [2024-11-18 03:15:04.017976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:00.623 [2024-11-18 03:15:04.018058] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:00.623 [2024-11-18 03:15:04.018250] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:00.623 [2024-11-18 03:15:04.018294] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:00.623 [2024-11-18 03:15:04.018563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:00.623 [2024-11-18 03:15:04.019070] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:00.623 [2024-11-18 03:15:04.019118] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:00.623 [2024-11-18 03:15:04.019287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.623 
03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.623 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.623 "name": "raid_bdev1", 00:15:00.623 "uuid": "33f5c9fd-192f-4546-845c-4e4cdc1fd2b7", 00:15:00.623 "strip_size_kb": 64, 00:15:00.623 "state": "online", 00:15:00.624 "raid_level": "raid5f", 00:15:00.624 "superblock": true, 00:15:00.624 "num_base_bdevs": 4, 00:15:00.624 "num_base_bdevs_discovered": 4, 00:15:00.624 "num_base_bdevs_operational": 4, 00:15:00.624 "base_bdevs_list": [ 00:15:00.624 { 00:15:00.624 "name": "pt1", 00:15:00.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:00.624 "is_configured": true, 00:15:00.624 "data_offset": 2048, 00:15:00.624 "data_size": 63488 00:15:00.624 }, 00:15:00.624 { 00:15:00.624 "name": "pt2", 00:15:00.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.624 "is_configured": true, 00:15:00.624 "data_offset": 2048, 00:15:00.624 
"data_size": 63488 00:15:00.624 }, 00:15:00.624 { 00:15:00.624 "name": "pt3", 00:15:00.624 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:00.624 "is_configured": true, 00:15:00.624 "data_offset": 2048, 00:15:00.624 "data_size": 63488 00:15:00.624 }, 00:15:00.624 { 00:15:00.624 "name": "pt4", 00:15:00.624 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:00.624 "is_configured": true, 00:15:00.624 "data_offset": 2048, 00:15:00.624 "data_size": 63488 00:15:00.624 } 00:15:00.624 ] 00:15:00.624 }' 00:15:00.624 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.624 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.884 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:00.884 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:00.884 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:00.884 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:00.884 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:00.884 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:00.884 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:00.884 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:00.884 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.884 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.884 [2024-11-18 03:15:04.408531] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.884 03:15:04 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.884 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:00.884 "name": "raid_bdev1", 00:15:00.884 "aliases": [ 00:15:00.884 "33f5c9fd-192f-4546-845c-4e4cdc1fd2b7" 00:15:00.884 ], 00:15:00.884 "product_name": "Raid Volume", 00:15:00.884 "block_size": 512, 00:15:00.884 "num_blocks": 190464, 00:15:00.884 "uuid": "33f5c9fd-192f-4546-845c-4e4cdc1fd2b7", 00:15:00.884 "assigned_rate_limits": { 00:15:00.884 "rw_ios_per_sec": 0, 00:15:00.884 "rw_mbytes_per_sec": 0, 00:15:00.885 "r_mbytes_per_sec": 0, 00:15:00.885 "w_mbytes_per_sec": 0 00:15:00.885 }, 00:15:00.885 "claimed": false, 00:15:00.885 "zoned": false, 00:15:00.885 "supported_io_types": { 00:15:00.885 "read": true, 00:15:00.885 "write": true, 00:15:00.885 "unmap": false, 00:15:00.885 "flush": false, 00:15:00.885 "reset": true, 00:15:00.885 "nvme_admin": false, 00:15:00.885 "nvme_io": false, 00:15:00.885 "nvme_io_md": false, 00:15:00.885 "write_zeroes": true, 00:15:00.885 "zcopy": false, 00:15:00.885 "get_zone_info": false, 00:15:00.885 "zone_management": false, 00:15:00.885 "zone_append": false, 00:15:00.885 "compare": false, 00:15:00.885 "compare_and_write": false, 00:15:00.885 "abort": false, 00:15:00.885 "seek_hole": false, 00:15:00.885 "seek_data": false, 00:15:00.885 "copy": false, 00:15:00.885 "nvme_iov_md": false 00:15:00.885 }, 00:15:00.885 "driver_specific": { 00:15:00.885 "raid": { 00:15:00.885 "uuid": "33f5c9fd-192f-4546-845c-4e4cdc1fd2b7", 00:15:00.885 "strip_size_kb": 64, 00:15:00.885 "state": "online", 00:15:00.885 "raid_level": "raid5f", 00:15:00.885 "superblock": true, 00:15:00.885 "num_base_bdevs": 4, 00:15:00.885 "num_base_bdevs_discovered": 4, 00:15:00.885 "num_base_bdevs_operational": 4, 00:15:00.885 "base_bdevs_list": [ 00:15:00.885 { 00:15:00.885 "name": "pt1", 00:15:00.885 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:00.885 "is_configured": true, 00:15:00.885 "data_offset": 2048, 
00:15:00.885 "data_size": 63488 00:15:00.885 }, 00:15:00.885 { 00:15:00.885 "name": "pt2", 00:15:00.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.885 "is_configured": true, 00:15:00.885 "data_offset": 2048, 00:15:00.885 "data_size": 63488 00:15:00.885 }, 00:15:00.885 { 00:15:00.885 "name": "pt3", 00:15:00.885 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:00.885 "is_configured": true, 00:15:00.885 "data_offset": 2048, 00:15:00.885 "data_size": 63488 00:15:00.885 }, 00:15:00.885 { 00:15:00.885 "name": "pt4", 00:15:00.885 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:00.885 "is_configured": true, 00:15:00.885 "data_offset": 2048, 00:15:00.885 "data_size": 63488 00:15:00.885 } 00:15:00.885 ] 00:15:00.885 } 00:15:00.885 } 00:15:00.885 }' 00:15:00.885 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:01.145 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:01.145 pt2 00:15:01.145 pt3 00:15:01.145 pt4' 00:15:01.145 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.145 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:01.145 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.145 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:01.145 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.145 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.145 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.145 03:15:04 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.145 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.145 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.145 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.146 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.406 [2024-11-18 03:15:04.751972] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=33f5c9fd-192f-4546-845c-4e4cdc1fd2b7 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
33f5c9fd-192f-4546-845c-4e4cdc1fd2b7 ']' 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.406 [2024-11-18 03:15:04.779703] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:01.406 [2024-11-18 03:15:04.779740] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.406 [2024-11-18 03:15:04.779821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.406 [2024-11-18 03:15:04.779909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:01.406 [2024-11-18 03:15:04.779932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:01.406 
03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.406 03:15:04 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.406 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.407 [2024-11-18 03:15:04.923500] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:01.407 [2024-11-18 03:15:04.925303] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:01.407 [2024-11-18 03:15:04.925353] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:01.407 [2024-11-18 03:15:04.925381] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:01.407 [2024-11-18 03:15:04.925426] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:01.407 [2024-11-18 03:15:04.925477] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:01.407 [2024-11-18 03:15:04.925508] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:01.407 [2024-11-18 03:15:04.925523] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:01.407 [2024-11-18 03:15:04.925536] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:01.407 [2024-11-18 03:15:04.925547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:15:01.407 request: 00:15:01.407 { 00:15:01.407 "name": "raid_bdev1", 00:15:01.407 "raid_level": "raid5f", 00:15:01.407 "base_bdevs": [ 00:15:01.407 "malloc1", 00:15:01.407 "malloc2", 00:15:01.407 "malloc3", 00:15:01.407 "malloc4" 00:15:01.407 ], 00:15:01.407 "strip_size_kb": 64, 00:15:01.407 "superblock": false, 00:15:01.407 "method": "bdev_raid_create", 00:15:01.407 "req_id": 1 00:15:01.407 } 00:15:01.407 Got JSON-RPC error response 
00:15:01.407 response: 00:15:01.407 { 00:15:01.407 "code": -17, 00:15:01.407 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:01.407 } 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:01.407 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.667 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:01.667 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:01.667 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:01.667 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.667 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.667 [2024-11-18 03:15:04.987336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:01.667 [2024-11-18 03:15:04.987391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:01.667 [2024-11-18 03:15:04.987411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:01.667 [2024-11-18 03:15:04.987420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.667 [2024-11-18 03:15:04.989535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.667 [2024-11-18 03:15:04.989570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:01.667 [2024-11-18 03:15:04.989640] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:01.667 [2024-11-18 03:15:04.989681] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:01.667 pt1 00:15:01.667 03:15:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.667 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:01.667 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.667 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.667 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.667 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.667 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.667 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.667 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.667 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.667 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:01.667 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.667 03:15:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.667 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.667 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.667 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.667 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.667 "name": "raid_bdev1", 00:15:01.667 "uuid": "33f5c9fd-192f-4546-845c-4e4cdc1fd2b7", 00:15:01.667 "strip_size_kb": 64, 00:15:01.667 "state": "configuring", 00:15:01.667 "raid_level": "raid5f", 00:15:01.667 "superblock": true, 00:15:01.667 "num_base_bdevs": 4, 00:15:01.667 "num_base_bdevs_discovered": 1, 00:15:01.667 "num_base_bdevs_operational": 4, 00:15:01.667 "base_bdevs_list": [ 00:15:01.667 { 00:15:01.667 "name": "pt1", 00:15:01.667 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:01.667 "is_configured": true, 00:15:01.667 "data_offset": 2048, 00:15:01.667 "data_size": 63488 00:15:01.667 }, 00:15:01.667 { 00:15:01.667 "name": null, 00:15:01.667 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.667 "is_configured": false, 00:15:01.667 "data_offset": 2048, 00:15:01.667 "data_size": 63488 00:15:01.667 }, 00:15:01.667 { 00:15:01.667 "name": null, 00:15:01.667 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:01.667 "is_configured": false, 00:15:01.667 "data_offset": 2048, 00:15:01.668 "data_size": 63488 00:15:01.668 }, 00:15:01.668 { 00:15:01.668 "name": null, 00:15:01.668 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:01.668 "is_configured": false, 00:15:01.668 "data_offset": 2048, 00:15:01.668 "data_size": 63488 00:15:01.668 } 00:15:01.668 ] 00:15:01.668 }' 
00:15:01.668 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.668 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.929 [2024-11-18 03:15:05.374696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:01.929 [2024-11-18 03:15:05.374755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.929 [2024-11-18 03:15:05.374783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:01.929 [2024-11-18 03:15:05.374793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.929 [2024-11-18 03:15:05.375247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.929 [2024-11-18 03:15:05.375275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:01.929 [2024-11-18 03:15:05.375352] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:01.929 [2024-11-18 03:15:05.375380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:01.929 pt2 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.929 [2024-11-18 03:15:05.382695] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.929 "name": "raid_bdev1", 00:15:01.929 "uuid": "33f5c9fd-192f-4546-845c-4e4cdc1fd2b7", 00:15:01.929 "strip_size_kb": 64, 00:15:01.929 "state": "configuring", 00:15:01.929 "raid_level": "raid5f", 00:15:01.929 "superblock": true, 00:15:01.929 "num_base_bdevs": 4, 00:15:01.929 "num_base_bdevs_discovered": 1, 00:15:01.929 "num_base_bdevs_operational": 4, 00:15:01.929 "base_bdevs_list": [ 00:15:01.929 { 00:15:01.929 "name": "pt1", 00:15:01.929 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:01.929 "is_configured": true, 00:15:01.929 "data_offset": 2048, 00:15:01.929 "data_size": 63488 00:15:01.929 }, 00:15:01.929 { 00:15:01.929 "name": null, 00:15:01.929 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.929 "is_configured": false, 00:15:01.929 "data_offset": 0, 00:15:01.929 "data_size": 63488 00:15:01.929 }, 00:15:01.929 { 00:15:01.929 "name": null, 00:15:01.929 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:01.929 "is_configured": false, 00:15:01.929 "data_offset": 2048, 00:15:01.929 "data_size": 63488 00:15:01.929 }, 00:15:01.929 { 00:15:01.929 "name": null, 00:15:01.929 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:01.929 "is_configured": false, 00:15:01.929 "data_offset": 2048, 00:15:01.929 "data_size": 63488 00:15:01.929 } 00:15:01.929 ] 00:15:01.929 }' 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.929 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.500 [2024-11-18 03:15:05.778044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:02.500 [2024-11-18 03:15:05.778113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.500 [2024-11-18 03:15:05.778131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:02.500 [2024-11-18 03:15:05.778142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.500 [2024-11-18 03:15:05.778543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.500 [2024-11-18 03:15:05.778577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:02.500 [2024-11-18 03:15:05.778649] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:02.500 [2024-11-18 03:15:05.778676] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:02.500 pt2 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.500 [2024-11-18 03:15:05.789952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:15:02.500 [2024-11-18 03:15:05.790018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.500 [2024-11-18 03:15:05.790045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:02.500 [2024-11-18 03:15:05.790056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.500 [2024-11-18 03:15:05.790389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.500 [2024-11-18 03:15:05.790420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:02.500 [2024-11-18 03:15:05.790480] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:02.500 [2024-11-18 03:15:05.790502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:02.500 pt3 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.500 [2024-11-18 03:15:05.801933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:02.500 [2024-11-18 03:15:05.801999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.500 [2024-11-18 03:15:05.802016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:02.500 [2024-11-18 03:15:05.802025] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.500 [2024-11-18 03:15:05.802318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.500 [2024-11-18 03:15:05.802342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:02.500 [2024-11-18 03:15:05.802394] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:02.500 [2024-11-18 03:15:05.802412] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:02.500 [2024-11-18 03:15:05.802509] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:02.500 [2024-11-18 03:15:05.802527] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:02.500 [2024-11-18 03:15:05.802748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:02.500 [2024-11-18 03:15:05.803223] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:02.500 [2024-11-18 03:15:05.803242] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:02.500 [2024-11-18 03:15:05.803340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.500 pt4 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.500 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.501 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.501 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.501 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.501 "name": "raid_bdev1", 00:15:02.501 "uuid": "33f5c9fd-192f-4546-845c-4e4cdc1fd2b7", 00:15:02.501 "strip_size_kb": 64, 00:15:02.501 "state": "online", 00:15:02.501 "raid_level": "raid5f", 00:15:02.501 "superblock": true, 00:15:02.501 "num_base_bdevs": 4, 00:15:02.501 "num_base_bdevs_discovered": 4, 00:15:02.501 "num_base_bdevs_operational": 4, 00:15:02.501 "base_bdevs_list": [ 00:15:02.501 { 00:15:02.501 "name": "pt1", 00:15:02.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.501 "is_configured": true, 00:15:02.501 
"data_offset": 2048, 00:15:02.501 "data_size": 63488 00:15:02.501 }, 00:15:02.501 { 00:15:02.501 "name": "pt2", 00:15:02.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.501 "is_configured": true, 00:15:02.501 "data_offset": 2048, 00:15:02.501 "data_size": 63488 00:15:02.501 }, 00:15:02.501 { 00:15:02.501 "name": "pt3", 00:15:02.501 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.501 "is_configured": true, 00:15:02.501 "data_offset": 2048, 00:15:02.501 "data_size": 63488 00:15:02.501 }, 00:15:02.501 { 00:15:02.501 "name": "pt4", 00:15:02.501 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:02.501 "is_configured": true, 00:15:02.501 "data_offset": 2048, 00:15:02.501 "data_size": 63488 00:15:02.501 } 00:15:02.501 ] 00:15:02.501 }' 00:15:02.501 03:15:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.501 03:15:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.761 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:02.761 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:02.761 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:02.761 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:02.761 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:02.761 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:02.761 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.761 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.761 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.761 03:15:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:02.761 [2024-11-18 03:15:06.197483] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.761 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.761 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:02.761 "name": "raid_bdev1", 00:15:02.761 "aliases": [ 00:15:02.761 "33f5c9fd-192f-4546-845c-4e4cdc1fd2b7" 00:15:02.761 ], 00:15:02.761 "product_name": "Raid Volume", 00:15:02.761 "block_size": 512, 00:15:02.761 "num_blocks": 190464, 00:15:02.761 "uuid": "33f5c9fd-192f-4546-845c-4e4cdc1fd2b7", 00:15:02.761 "assigned_rate_limits": { 00:15:02.761 "rw_ios_per_sec": 0, 00:15:02.761 "rw_mbytes_per_sec": 0, 00:15:02.761 "r_mbytes_per_sec": 0, 00:15:02.761 "w_mbytes_per_sec": 0 00:15:02.761 }, 00:15:02.761 "claimed": false, 00:15:02.761 "zoned": false, 00:15:02.761 "supported_io_types": { 00:15:02.761 "read": true, 00:15:02.761 "write": true, 00:15:02.761 "unmap": false, 00:15:02.761 "flush": false, 00:15:02.761 "reset": true, 00:15:02.761 "nvme_admin": false, 00:15:02.761 "nvme_io": false, 00:15:02.761 "nvme_io_md": false, 00:15:02.761 "write_zeroes": true, 00:15:02.761 "zcopy": false, 00:15:02.761 "get_zone_info": false, 00:15:02.761 "zone_management": false, 00:15:02.761 "zone_append": false, 00:15:02.761 "compare": false, 00:15:02.761 "compare_and_write": false, 00:15:02.761 "abort": false, 00:15:02.761 "seek_hole": false, 00:15:02.761 "seek_data": false, 00:15:02.761 "copy": false, 00:15:02.761 "nvme_iov_md": false 00:15:02.761 }, 00:15:02.761 "driver_specific": { 00:15:02.761 "raid": { 00:15:02.761 "uuid": "33f5c9fd-192f-4546-845c-4e4cdc1fd2b7", 00:15:02.761 "strip_size_kb": 64, 00:15:02.761 "state": "online", 00:15:02.761 "raid_level": "raid5f", 00:15:02.761 "superblock": true, 00:15:02.761 "num_base_bdevs": 4, 00:15:02.761 "num_base_bdevs_discovered": 4, 
00:15:02.761 "num_base_bdevs_operational": 4, 00:15:02.761 "base_bdevs_list": [ 00:15:02.761 { 00:15:02.761 "name": "pt1", 00:15:02.761 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.761 "is_configured": true, 00:15:02.761 "data_offset": 2048, 00:15:02.761 "data_size": 63488 00:15:02.761 }, 00:15:02.761 { 00:15:02.761 "name": "pt2", 00:15:02.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.761 "is_configured": true, 00:15:02.761 "data_offset": 2048, 00:15:02.761 "data_size": 63488 00:15:02.761 }, 00:15:02.761 { 00:15:02.761 "name": "pt3", 00:15:02.761 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.761 "is_configured": true, 00:15:02.762 "data_offset": 2048, 00:15:02.762 "data_size": 63488 00:15:02.762 }, 00:15:02.762 { 00:15:02.762 "name": "pt4", 00:15:02.762 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:02.762 "is_configured": true, 00:15:02.762 "data_offset": 2048, 00:15:02.762 "data_size": 63488 00:15:02.762 } 00:15:02.762 ] 00:15:02.762 } 00:15:02.762 } 00:15:02.762 }' 00:15:02.762 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:02.762 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:02.762 pt2 00:15:02.762 pt3 00:15:02.762 pt4' 00:15:02.762 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.762 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:02.762 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.762 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.762 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:15:02.762 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.762 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.022 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.022 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.022 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.022 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.022 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.022 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.023 [2024-11-18 03:15:06.464980] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.023 03:15:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 33f5c9fd-192f-4546-845c-4e4cdc1fd2b7 '!=' 33f5c9fd-192f-4546-845c-4e4cdc1fd2b7 ']' 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.023 [2024-11-18 03:15:06.512736] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.023 "name": "raid_bdev1", 00:15:03.023 "uuid": "33f5c9fd-192f-4546-845c-4e4cdc1fd2b7", 00:15:03.023 "strip_size_kb": 64, 00:15:03.023 "state": "online", 00:15:03.023 "raid_level": "raid5f", 00:15:03.023 "superblock": true, 00:15:03.023 "num_base_bdevs": 4, 00:15:03.023 "num_base_bdevs_discovered": 3, 00:15:03.023 "num_base_bdevs_operational": 3, 00:15:03.023 "base_bdevs_list": [ 00:15:03.023 { 00:15:03.023 "name": null, 00:15:03.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.023 "is_configured": false, 00:15:03.023 "data_offset": 0, 00:15:03.023 "data_size": 63488 00:15:03.023 }, 00:15:03.023 { 00:15:03.023 "name": "pt2", 00:15:03.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.023 "is_configured": true, 00:15:03.023 "data_offset": 2048, 00:15:03.023 "data_size": 63488 00:15:03.023 }, 00:15:03.023 { 00:15:03.023 "name": "pt3", 00:15:03.023 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.023 "is_configured": true, 00:15:03.023 "data_offset": 2048, 00:15:03.023 "data_size": 63488 00:15:03.023 }, 00:15:03.023 { 00:15:03.023 "name": "pt4", 00:15:03.023 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:03.023 "is_configured": true, 00:15:03.023 
"data_offset": 2048, 00:15:03.023 "data_size": 63488 00:15:03.023 } 00:15:03.023 ] 00:15:03.023 }' 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.023 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.594 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:03.594 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.595 [2024-11-18 03:15:06.951927] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:03.595 [2024-11-18 03:15:06.951968] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.595 [2024-11-18 03:15:06.952054] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.595 [2024-11-18 03:15:06.952124] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.595 [2024-11-18 03:15:06.952135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.595 03:15:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.595 [2024-11-18 03:15:07.023782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:03.595 [2024-11-18 03:15:07.023842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.595 [2024-11-18 03:15:07.023859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:03.595 [2024-11-18 03:15:07.023869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.595 [2024-11-18 03:15:07.025906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.595 [2024-11-18 03:15:07.025946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:03.595 [2024-11-18 03:15:07.026023] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:03.595 [2024-11-18 03:15:07.026057] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:03.595 pt2 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.595 "name": "raid_bdev1", 00:15:03.595 "uuid": "33f5c9fd-192f-4546-845c-4e4cdc1fd2b7", 00:15:03.595 "strip_size_kb": 64, 00:15:03.595 "state": "configuring", 00:15:03.595 "raid_level": "raid5f", 00:15:03.595 "superblock": true, 00:15:03.595 
"num_base_bdevs": 4, 00:15:03.595 "num_base_bdevs_discovered": 1, 00:15:03.595 "num_base_bdevs_operational": 3, 00:15:03.595 "base_bdevs_list": [ 00:15:03.595 { 00:15:03.595 "name": null, 00:15:03.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.595 "is_configured": false, 00:15:03.595 "data_offset": 2048, 00:15:03.595 "data_size": 63488 00:15:03.595 }, 00:15:03.595 { 00:15:03.595 "name": "pt2", 00:15:03.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.595 "is_configured": true, 00:15:03.595 "data_offset": 2048, 00:15:03.595 "data_size": 63488 00:15:03.595 }, 00:15:03.595 { 00:15:03.595 "name": null, 00:15:03.595 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.595 "is_configured": false, 00:15:03.595 "data_offset": 2048, 00:15:03.595 "data_size": 63488 00:15:03.595 }, 00:15:03.595 { 00:15:03.595 "name": null, 00:15:03.595 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:03.595 "is_configured": false, 00:15:03.595 "data_offset": 2048, 00:15:03.595 "data_size": 63488 00:15:03.595 } 00:15:03.595 ] 00:15:03.595 }' 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.595 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.856 [2024-11-18 03:15:07.415136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:03.856 [2024-11-18 
03:15:07.415192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.856 [2024-11-18 03:15:07.415210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:03.856 [2024-11-18 03:15:07.415223] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.856 [2024-11-18 03:15:07.415603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.856 [2024-11-18 03:15:07.415641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:03.856 [2024-11-18 03:15:07.415712] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:03.856 [2024-11-18 03:15:07.415744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:03.856 pt3 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.856 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.116 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.116 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.116 "name": "raid_bdev1", 00:15:04.116 "uuid": "33f5c9fd-192f-4546-845c-4e4cdc1fd2b7", 00:15:04.116 "strip_size_kb": 64, 00:15:04.116 "state": "configuring", 00:15:04.116 "raid_level": "raid5f", 00:15:04.116 "superblock": true, 00:15:04.116 "num_base_bdevs": 4, 00:15:04.116 "num_base_bdevs_discovered": 2, 00:15:04.116 "num_base_bdevs_operational": 3, 00:15:04.116 "base_bdevs_list": [ 00:15:04.116 { 00:15:04.116 "name": null, 00:15:04.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.116 "is_configured": false, 00:15:04.116 "data_offset": 2048, 00:15:04.116 "data_size": 63488 00:15:04.116 }, 00:15:04.116 { 00:15:04.116 "name": "pt2", 00:15:04.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.116 "is_configured": true, 00:15:04.116 "data_offset": 2048, 00:15:04.116 "data_size": 63488 00:15:04.117 }, 00:15:04.117 { 00:15:04.117 "name": "pt3", 00:15:04.117 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.117 "is_configured": true, 00:15:04.117 "data_offset": 2048, 00:15:04.117 "data_size": 63488 00:15:04.117 }, 00:15:04.117 { 00:15:04.117 "name": null, 00:15:04.117 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:04.117 "is_configured": false, 00:15:04.117 "data_offset": 2048, 
00:15:04.117 "data_size": 63488 00:15:04.117 } 00:15:04.117 ] 00:15:04.117 }' 00:15:04.117 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.117 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.377 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:04.377 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:04.377 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:04.377 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:04.377 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.377 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.377 [2024-11-18 03:15:07.854421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:04.377 [2024-11-18 03:15:07.854496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.377 [2024-11-18 03:15:07.854524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:04.377 [2024-11-18 03:15:07.854538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.377 [2024-11-18 03:15:07.854944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.377 [2024-11-18 03:15:07.854985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:04.377 [2024-11-18 03:15:07.855064] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:04.377 [2024-11-18 03:15:07.855094] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:04.377 [2024-11-18 03:15:07.855194] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:04.377 [2024-11-18 03:15:07.855211] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:04.377 [2024-11-18 03:15:07.855449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:04.377 [2024-11-18 03:15:07.856004] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:04.377 [2024-11-18 03:15:07.856022] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:15:04.378 [2024-11-18 03:15:07.856264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.378 pt4 00:15:04.378 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.378 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:04.378 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.378 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.378 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.378 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.378 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.378 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.378 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.378 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.378 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.378 
03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.378 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.378 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.378 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.378 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.378 03:15:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.378 "name": "raid_bdev1", 00:15:04.378 "uuid": "33f5c9fd-192f-4546-845c-4e4cdc1fd2b7", 00:15:04.378 "strip_size_kb": 64, 00:15:04.378 "state": "online", 00:15:04.378 "raid_level": "raid5f", 00:15:04.378 "superblock": true, 00:15:04.378 "num_base_bdevs": 4, 00:15:04.378 "num_base_bdevs_discovered": 3, 00:15:04.378 "num_base_bdevs_operational": 3, 00:15:04.378 "base_bdevs_list": [ 00:15:04.378 { 00:15:04.378 "name": null, 00:15:04.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.378 "is_configured": false, 00:15:04.378 "data_offset": 2048, 00:15:04.378 "data_size": 63488 00:15:04.378 }, 00:15:04.378 { 00:15:04.378 "name": "pt2", 00:15:04.378 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.378 "is_configured": true, 00:15:04.378 "data_offset": 2048, 00:15:04.378 "data_size": 63488 00:15:04.378 }, 00:15:04.378 { 00:15:04.378 "name": "pt3", 00:15:04.378 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.378 "is_configured": true, 00:15:04.378 "data_offset": 2048, 00:15:04.378 "data_size": 63488 00:15:04.378 }, 00:15:04.378 { 00:15:04.378 "name": "pt4", 00:15:04.378 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:04.378 "is_configured": true, 00:15:04.378 "data_offset": 2048, 00:15:04.378 "data_size": 63488 00:15:04.378 } 00:15:04.378 ] 00:15:04.378 }' 00:15:04.378 03:15:07 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.378 03:15:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.949 [2024-11-18 03:15:08.277681] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:04.949 [2024-11-18 03:15:08.277717] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:04.949 [2024-11-18 03:15:08.277791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.949 [2024-11-18 03:15:08.277865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.949 [2024-11-18 03:15:08.277874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:04.949 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.950 [2024-11-18 03:15:08.329598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:04.950 [2024-11-18 03:15:08.329661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.950 [2024-11-18 03:15:08.329681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:04.950 [2024-11-18 03:15:08.329690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.950 [2024-11-18 03:15:08.331905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.950 [2024-11-18 03:15:08.331943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:04.950 [2024-11-18 03:15:08.332025] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:04.950 [2024-11-18 03:15:08.332071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:04.950 
[2024-11-18 03:15:08.332176] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:04.950 [2024-11-18 03:15:08.332196] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:04.950 [2024-11-18 03:15:08.332213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:15:04.950 [2024-11-18 03:15:08.332253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:04.950 [2024-11-18 03:15:08.332364] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:04.950 pt1 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.950 "name": "raid_bdev1", 00:15:04.950 "uuid": "33f5c9fd-192f-4546-845c-4e4cdc1fd2b7", 00:15:04.950 "strip_size_kb": 64, 00:15:04.950 "state": "configuring", 00:15:04.950 "raid_level": "raid5f", 00:15:04.950 "superblock": true, 00:15:04.950 "num_base_bdevs": 4, 00:15:04.950 "num_base_bdevs_discovered": 2, 00:15:04.950 "num_base_bdevs_operational": 3, 00:15:04.950 "base_bdevs_list": [ 00:15:04.950 { 00:15:04.950 "name": null, 00:15:04.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.950 "is_configured": false, 00:15:04.950 "data_offset": 2048, 00:15:04.950 "data_size": 63488 00:15:04.950 }, 00:15:04.950 { 00:15:04.950 "name": "pt2", 00:15:04.950 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.950 "is_configured": true, 00:15:04.950 "data_offset": 2048, 00:15:04.950 "data_size": 63488 00:15:04.950 }, 00:15:04.950 { 00:15:04.950 "name": "pt3", 00:15:04.950 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.950 "is_configured": true, 00:15:04.950 "data_offset": 2048, 00:15:04.950 "data_size": 63488 00:15:04.950 }, 00:15:04.950 { 00:15:04.950 "name": null, 00:15:04.950 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:04.950 "is_configured": false, 00:15:04.950 "data_offset": 2048, 00:15:04.950 "data_size": 63488 00:15:04.950 } 00:15:04.950 ] 
00:15:04.950 }' 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.950 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.520 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:05.520 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:05.520 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.520 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.520 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.520 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:05.520 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:05.520 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.520 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.520 [2024-11-18 03:15:08.816779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:05.520 [2024-11-18 03:15:08.816847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.520 [2024-11-18 03:15:08.816884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:05.520 [2024-11-18 03:15:08.816897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.520 [2024-11-18 03:15:08.817344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.520 [2024-11-18 03:15:08.817377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:15:05.520 [2024-11-18 03:15:08.817452] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:05.520 [2024-11-18 03:15:08.817483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:05.520 [2024-11-18 03:15:08.817593] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:05.521 [2024-11-18 03:15:08.817607] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:05.521 [2024-11-18 03:15:08.817860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:05.521 [2024-11-18 03:15:08.818477] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:05.521 [2024-11-18 03:15:08.818500] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:05.521 [2024-11-18 03:15:08.818695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.521 pt4 00:15:05.521 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.521 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:05.521 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.521 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.521 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.521 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.521 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.521 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.521 03:15:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.521 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.521 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.521 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.521 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.521 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.521 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.521 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.521 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.521 "name": "raid_bdev1", 00:15:05.521 "uuid": "33f5c9fd-192f-4546-845c-4e4cdc1fd2b7", 00:15:05.521 "strip_size_kb": 64, 00:15:05.521 "state": "online", 00:15:05.521 "raid_level": "raid5f", 00:15:05.521 "superblock": true, 00:15:05.521 "num_base_bdevs": 4, 00:15:05.521 "num_base_bdevs_discovered": 3, 00:15:05.521 "num_base_bdevs_operational": 3, 00:15:05.521 "base_bdevs_list": [ 00:15:05.521 { 00:15:05.521 "name": null, 00:15:05.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.521 "is_configured": false, 00:15:05.521 "data_offset": 2048, 00:15:05.521 "data_size": 63488 00:15:05.521 }, 00:15:05.521 { 00:15:05.521 "name": "pt2", 00:15:05.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.521 "is_configured": true, 00:15:05.521 "data_offset": 2048, 00:15:05.521 "data_size": 63488 00:15:05.521 }, 00:15:05.521 { 00:15:05.521 "name": "pt3", 00:15:05.521 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:05.521 "is_configured": true, 00:15:05.521 "data_offset": 2048, 00:15:05.521 "data_size": 63488 
00:15:05.521 }, 00:15:05.521 { 00:15:05.521 "name": "pt4", 00:15:05.521 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:05.521 "is_configured": true, 00:15:05.521 "data_offset": 2048, 00:15:05.521 "data_size": 63488 00:15:05.521 } 00:15:05.521 ] 00:15:05.521 }' 00:15:05.521 03:15:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.521 03:15:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.781 03:15:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:05.781 03:15:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:05.781 03:15:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.781 03:15:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.781 03:15:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.781 03:15:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:05.781 03:15:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:05.781 03:15:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.781 03:15:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.781 03:15:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:05.781 [2024-11-18 03:15:09.300296] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.781 03:15:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.781 03:15:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 33f5c9fd-192f-4546-845c-4e4cdc1fd2b7 '!=' 33f5c9fd-192f-4546-845c-4e4cdc1fd2b7 ']' 00:15:05.781 03:15:09 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94668 00:15:05.781 03:15:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 94668 ']' 00:15:05.781 03:15:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 94668 00:15:05.781 03:15:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:05.781 03:15:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:05.781 03:15:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94668 00:15:06.041 killing process with pid 94668 00:15:06.041 03:15:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:06.041 03:15:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:06.041 03:15:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94668' 00:15:06.041 03:15:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 94668 00:15:06.041 [2024-11-18 03:15:09.369619] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:06.041 [2024-11-18 03:15:09.369710] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.041 [2024-11-18 03:15:09.369788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.041 [2024-11-18 03:15:09.369798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:06.041 03:15:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 94668 00:15:06.041 [2024-11-18 03:15:09.414100] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:06.301 ************************************ 00:15:06.301 END TEST raid5f_superblock_test 00:15:06.301 
************************************ 00:15:06.301 03:15:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:06.301 00:15:06.301 real 0m6.721s 00:15:06.301 user 0m11.206s 00:15:06.301 sys 0m1.465s 00:15:06.302 03:15:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:06.302 03:15:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.302 03:15:09 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:06.302 03:15:09 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:06.302 03:15:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:06.302 03:15:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:06.302 03:15:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:06.302 ************************************ 00:15:06.302 START TEST raid5f_rebuild_test 00:15:06.302 ************************************ 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:06.302 03:15:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=95137 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 95137 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 95137 ']' 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:06.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:06.302 03:15:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.302 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:06.302 Zero copy mechanism will not be used. 00:15:06.302 [2024-11-18 03:15:09.800790] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:06.302 [2024-11-18 03:15:09.800916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95137 ] 00:15:06.562 [2024-11-18 03:15:09.960273] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.562 [2024-11-18 03:15:10.011098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.563 [2024-11-18 03:15:10.053270] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.563 [2024-11-18 03:15:10.053313] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.133 BaseBdev1_malloc 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.133 [2024-11-18 03:15:10.667512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:15:07.133 [2024-11-18 03:15:10.667583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.133 [2024-11-18 03:15:10.667614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:07.133 [2024-11-18 03:15:10.667628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.133 [2024-11-18 03:15:10.669678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.133 [2024-11-18 03:15:10.669717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:07.133 BaseBdev1 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.133 BaseBdev2_malloc 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.133 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.133 [2024-11-18 03:15:10.700700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:07.133 [2024-11-18 03:15:10.700769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.133 [2024-11-18 03:15:10.700794] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:07.133 [2024-11-18 03:15:10.700805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.133 [2024-11-18 03:15:10.703006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.133 [2024-11-18 03:15:10.703045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:07.133 BaseBdev2 00:15:07.134 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.134 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:07.134 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:07.134 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.134 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.394 BaseBdev3_malloc 00:15:07.394 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.394 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:07.394 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.394 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.394 [2024-11-18 03:15:10.729309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:07.394 [2024-11-18 03:15:10.729367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.394 [2024-11-18 03:15:10.729391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:07.394 [2024-11-18 03:15:10.729400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.394 
[2024-11-18 03:15:10.731409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.394 [2024-11-18 03:15:10.731447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:07.394 BaseBdev3 00:15:07.394 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.394 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:07.394 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:07.394 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.394 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.394 BaseBdev4_malloc 00:15:07.394 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.394 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:07.394 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.394 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.394 [2024-11-18 03:15:10.757875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:07.394 [2024-11-18 03:15:10.757937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.395 [2024-11-18 03:15:10.757973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:07.395 [2024-11-18 03:15:10.757982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.395 [2024-11-18 03:15:10.759992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.395 [2024-11-18 03:15:10.760028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:15:07.395 BaseBdev4 00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.395 spare_malloc 00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.395 spare_delay 00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.395 [2024-11-18 03:15:10.798439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:07.395 [2024-11-18 03:15:10.798515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.395 [2024-11-18 03:15:10.798538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:07.395 [2024-11-18 03:15:10.798547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.395 [2024-11-18 03:15:10.800581] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:07.395 [2024-11-18 03:15:10.800620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:15:07.395 spare
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:07.395 [2024-11-18 03:15:10.810510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:07.395 [2024-11-18 03:15:10.812249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:07.395 [2024-11-18 03:15:10.812319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:07.395 [2024-11-18 03:15:10.812358] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:15:07.395 [2024-11-18 03:15:10.812437] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:15:07.395 [2024-11-18 03:15:10.812445] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:15:07.395 [2024-11-18 03:15:10.812679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:15:07.395 [2024-11-18 03:15:10.813130] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:15:07.395 [2024-11-18 03:15:10.813153] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:15:07.395 [2024-11-18 03:15:10.813265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:07.395 "name": "raid_bdev1",
00:15:07.395 "uuid": "e2a7e6da-f3d1-4a40-a2ac-446617524588",
00:15:07.395 "strip_size_kb": 64,
00:15:07.395 "state": "online",
00:15:07.395 "raid_level": "raid5f",
00:15:07.395 "superblock": false,
00:15:07.395 "num_base_bdevs": 4,
00:15:07.395 "num_base_bdevs_discovered": 4,
00:15:07.395 "num_base_bdevs_operational": 4,
00:15:07.395 "base_bdevs_list": [
00:15:07.395 {
00:15:07.395 "name": "BaseBdev1",
00:15:07.395 "uuid": "0f4cd197-7c1b-5297-b90a-a27261b84863",
00:15:07.395 "is_configured": true,
00:15:07.395 "data_offset": 0,
00:15:07.395 "data_size": 65536
00:15:07.395 },
00:15:07.395 {
00:15:07.395 "name": "BaseBdev2",
00:15:07.395 "uuid": "5f6f6349-55ea-586c-93a5-7b4bbaf34b53",
00:15:07.395 "is_configured": true,
00:15:07.395 "data_offset": 0,
00:15:07.395 "data_size": 65536
00:15:07.395 },
00:15:07.395 {
00:15:07.395 "name": "BaseBdev3",
00:15:07.395 "uuid": "9fa2ed43-5581-5e48-b2af-e70cd259e847",
00:15:07.395 "is_configured": true,
00:15:07.395 "data_offset": 0,
00:15:07.395 "data_size": 65536
00:15:07.395 },
00:15:07.395 {
00:15:07.395 "name": "BaseBdev4",
00:15:07.395 "uuid": "2a61eaec-bb70-51a5-984f-894f17000fb5",
00:15:07.395 "is_configured": true,
00:15:07.395 "data_offset": 0,
00:15:07.395 "data_size": 65536
00:15:07.395 }
00:15:07.395 ]
00:15:07.395 }'
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:07.395 03:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:07.655 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:07.655 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:07.655 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:07.655 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:15:07.655 [2024-11-18 03:15:11.222554] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:15:07.915 [2024-11-18 03:15:11.446039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:15:07.915 /dev/nbd0
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:15:07.915 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:15:08.176 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break
00:15:08.176 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:15:08.176 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:15:08.176 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:08.176 1+0 records in
00:15:08.176 1+0 records out
00:15:08.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000168579 s, 24.3 MB/s
00:15:08.176 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:08.176 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:15:08.176 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:08.176 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:15:08.176 03:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:15:08.176 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:08.176 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:08.176 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:15:08.176 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384
00:15:08.176 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192
00:15:08.176 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct
00:15:08.436 512+0 records in
00:15:08.436 512+0 records out
00:15:08.436 100663296 bytes (101 MB, 96 MiB) copied, 0.434995 s, 231 MB/s
00:15:08.436 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:15:08.436 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:15:08.436 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:15:08.436 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:08.436 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:15:08.436 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:08.436 03:15:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:15:08.696 [2024-11-18 03:15:12.140439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:08.696 [2024-11-18 03:15:12.148490] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.696 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:08.696 "name": "raid_bdev1",
00:15:08.696 "uuid": "e2a7e6da-f3d1-4a40-a2ac-446617524588",
00:15:08.696 "strip_size_kb": 64,
00:15:08.696 "state": "online",
00:15:08.696 "raid_level": "raid5f",
00:15:08.696 "superblock": false,
00:15:08.696 "num_base_bdevs": 4,
00:15:08.696 "num_base_bdevs_discovered": 3,
00:15:08.696 "num_base_bdevs_operational": 3,
00:15:08.696 "base_bdevs_list": [
00:15:08.696 {
00:15:08.697 "name": null,
00:15:08.697 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:08.697 "is_configured": false,
00:15:08.697 "data_offset": 0,
00:15:08.697 "data_size": 65536
00:15:08.697 },
00:15:08.697 {
00:15:08.697 "name": "BaseBdev2",
00:15:08.697 "uuid": "5f6f6349-55ea-586c-93a5-7b4bbaf34b53",
00:15:08.697 "is_configured": true,
00:15:08.697 "data_offset": 0,
00:15:08.697 "data_size": 65536
00:15:08.697 },
00:15:08.697 {
00:15:08.697 "name": "BaseBdev3",
00:15:08.697 "uuid": "9fa2ed43-5581-5e48-b2af-e70cd259e847",
00:15:08.697 "is_configured": true,
00:15:08.697 "data_offset": 0,
00:15:08.697 "data_size": 65536
00:15:08.697 },
00:15:08.697 {
00:15:08.697 "name": "BaseBdev4",
00:15:08.697 "uuid": "2a61eaec-bb70-51a5-984f-894f17000fb5",
00:15:08.697 "is_configured": true,
00:15:08.697 "data_offset": 0,
00:15:08.697 "data_size": 65536
00:15:08.697 }
00:15:08.697 ]
00:15:08.697 }'
00:15:08.697 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:08.697 03:15:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:09.266 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:09.266 03:15:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.266 03:15:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:09.266 [2024-11-18 03:15:12.547879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:09.266 [2024-11-18 03:15:12.551389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0
00:15:09.266 [2024-11-18 03:15:12.553632] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:09.266 03:15:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.266 03:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1
00:15:10.205 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:10.205 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:10.205 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:10.205 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:10.205 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:10.205 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:10.205 03:15:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.205 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:10.205 03:15:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:10.205 03:15:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.205 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:10.205 "name": "raid_bdev1",
00:15:10.205 "uuid": "e2a7e6da-f3d1-4a40-a2ac-446617524588",
00:15:10.205 "strip_size_kb": 64,
00:15:10.205 "state": "online",
00:15:10.205 "raid_level": "raid5f",
00:15:10.205 "superblock": false,
00:15:10.205 "num_base_bdevs": 4,
00:15:10.205 "num_base_bdevs_discovered": 4,
00:15:10.205 "num_base_bdevs_operational": 4,
00:15:10.205 "process": {
00:15:10.205 "type": "rebuild",
00:15:10.205 "target": "spare",
00:15:10.205 "progress": {
00:15:10.205 "blocks": 19200,
00:15:10.205 "percent": 9
00:15:10.205 }
00:15:10.205 },
00:15:10.205 "base_bdevs_list": [
00:15:10.205 {
00:15:10.205 "name": "spare",
00:15:10.205 "uuid": "33dd8815-2fc6-505f-9758-6791835f4ded",
00:15:10.205 "is_configured": true,
00:15:10.205 "data_offset": 0,
00:15:10.206 "data_size": 65536
00:15:10.206 },
00:15:10.206 {
00:15:10.206 "name": "BaseBdev2",
00:15:10.206 "uuid": "5f6f6349-55ea-586c-93a5-7b4bbaf34b53",
00:15:10.206 "is_configured": true,
00:15:10.206 "data_offset": 0,
00:15:10.206 "data_size": 65536
00:15:10.206 },
00:15:10.206 {
00:15:10.206 "name": "BaseBdev3",
00:15:10.206 "uuid": "9fa2ed43-5581-5e48-b2af-e70cd259e847",
00:15:10.206 "is_configured": true,
00:15:10.206 "data_offset": 0,
00:15:10.206 "data_size": 65536
00:15:10.206 },
00:15:10.206 {
00:15:10.206 "name": "BaseBdev4",
00:15:10.206 "uuid": "2a61eaec-bb70-51a5-984f-894f17000fb5",
00:15:10.206 "is_configured": true,
00:15:10.206 "data_offset": 0,
00:15:10.206 "data_size": 65536
00:15:10.206 }
00:15:10.206 ]
00:15:10.206 }'
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:10.206 [2024-11-18 03:15:13.712549] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:10.206 [2024-11-18 03:15:13.760093] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:10.206 [2024-11-18 03:15:13.760156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:10.206 [2024-11-18 03:15:13.760178] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:10.206 [2024-11-18 03:15:13.760186] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:10.206 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:10.466 03:15:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.466 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:10.466 "name": "raid_bdev1",
00:15:10.466 "uuid": "e2a7e6da-f3d1-4a40-a2ac-446617524588",
00:15:10.466 "strip_size_kb": 64,
00:15:10.466 "state": "online",
00:15:10.466 "raid_level": "raid5f",
00:15:10.466 "superblock": false,
00:15:10.466 "num_base_bdevs": 4,
00:15:10.466 "num_base_bdevs_discovered": 3,
00:15:10.467 "num_base_bdevs_operational": 3,
00:15:10.467 "base_bdevs_list": [
00:15:10.467 {
00:15:10.467 "name": null,
00:15:10.467 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:10.467 "is_configured": false,
00:15:10.467 "data_offset": 0,
00:15:10.467 "data_size": 65536
00:15:10.467 },
00:15:10.467 {
00:15:10.467 "name": "BaseBdev2",
00:15:10.467 "uuid": "5f6f6349-55ea-586c-93a5-7b4bbaf34b53",
00:15:10.467 "is_configured": true,
00:15:10.467 "data_offset": 0,
00:15:10.467 "data_size": 65536
00:15:10.467 },
00:15:10.467 {
00:15:10.467 "name": "BaseBdev3",
00:15:10.467 "uuid": "9fa2ed43-5581-5e48-b2af-e70cd259e847",
00:15:10.467 "is_configured": true,
00:15:10.467 "data_offset": 0,
00:15:10.467 "data_size": 65536
00:15:10.467 },
00:15:10.467 {
00:15:10.467 "name": "BaseBdev4",
00:15:10.467 "uuid": "2a61eaec-bb70-51a5-984f-894f17000fb5",
00:15:10.467 "is_configured": true,
00:15:10.467 "data_offset": 0,
00:15:10.467 "data_size": 65536
00:15:10.467 }
00:15:10.467 ]
00:15:10.467 }'
00:15:10.467 03:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:10.467 03:15:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:10.727 03:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:10.727 03:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:10.727 03:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:10.727 03:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:10.727 03:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:10.727 03:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:10.727 03:15:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.727 03:15:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:10.727 03:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:10.727 03:15:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.727 03:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:10.727 "name": "raid_bdev1",
00:15:10.727 "uuid": "e2a7e6da-f3d1-4a40-a2ac-446617524588",
00:15:10.727 "strip_size_kb": 64,
00:15:10.727 "state": "online",
00:15:10.727 "raid_level": "raid5f",
00:15:10.727 "superblock": false,
00:15:10.727 "num_base_bdevs": 4,
00:15:10.727 "num_base_bdevs_discovered": 3,
00:15:10.727 "num_base_bdevs_operational": 3,
00:15:10.727 "base_bdevs_list": [
00:15:10.727 {
00:15:10.727 "name": null,
00:15:10.727 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:10.727 "is_configured": false,
00:15:10.727 "data_offset": 0,
00:15:10.727 "data_size": 65536
00:15:10.727 },
00:15:10.727 {
00:15:10.727 "name": "BaseBdev2",
00:15:10.727 "uuid": "5f6f6349-55ea-586c-93a5-7b4bbaf34b53",
00:15:10.727 "is_configured": true,
00:15:10.727 "data_offset": 0,
00:15:10.727 "data_size": 65536
00:15:10.727 },
00:15:10.727 {
00:15:10.727 "name": "BaseBdev3",
00:15:10.727 "uuid": "9fa2ed43-5581-5e48-b2af-e70cd259e847",
00:15:10.727 "is_configured": true,
00:15:10.727 "data_offset": 0,
00:15:10.727 "data_size": 65536
00:15:10.727 },
00:15:10.727 {
00:15:10.727 "name": "BaseBdev4",
00:15:10.727 "uuid": "2a61eaec-bb70-51a5-984f-894f17000fb5",
00:15:10.727 "is_configured": true,
00:15:10.727 "data_offset": 0,
00:15:10.727 "data_size": 65536
00:15:10.727 }
00:15:10.727 ]
00:15:10.727 }'
00:15:10.727 03:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:10.727 03:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:10.727 03:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:10.988 03:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:10.988 03:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:10.988 03:15:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.988 03:15:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:10.988 [2024-11-18 03:15:14.316724] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:10.988 [2024-11-18 03:15:14.319997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680
00:15:10.988 [2024-11-18 03:15:14.322195] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:10.988 03:15:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.988 03:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:11.950 "name": "raid_bdev1",
00:15:11.950 "uuid": "e2a7e6da-f3d1-4a40-a2ac-446617524588",
00:15:11.950 "strip_size_kb": 64,
00:15:11.950 "state": "online",
00:15:11.950 "raid_level": "raid5f",
00:15:11.950 "superblock": false,
00:15:11.950 "num_base_bdevs": 4,
00:15:11.950 "num_base_bdevs_discovered": 4,
00:15:11.950 "num_base_bdevs_operational": 4,
00:15:11.950 "process": {
00:15:11.950 "type": "rebuild",
00:15:11.950 "target": "spare",
00:15:11.950 "progress": {
00:15:11.950 "blocks": 19200,
00:15:11.950 "percent": 9
00:15:11.950 }
00:15:11.950 },
00:15:11.950 "base_bdevs_list": [
00:15:11.950 {
00:15:11.950 "name": "spare",
00:15:11.950 "uuid": "33dd8815-2fc6-505f-9758-6791835f4ded",
00:15:11.950 "is_configured": true,
00:15:11.950 "data_offset": 0,
00:15:11.950 "data_size": 65536
00:15:11.950 },
00:15:11.950 {
00:15:11.950 "name": "BaseBdev2",
00:15:11.950 "uuid": "5f6f6349-55ea-586c-93a5-7b4bbaf34b53",
00:15:11.950 "is_configured": true,
00:15:11.950 "data_offset": 0,
00:15:11.950 "data_size": 65536
00:15:11.950 },
00:15:11.950 {
00:15:11.950 "name": "BaseBdev3",
00:15:11.950 "uuid": "9fa2ed43-5581-5e48-b2af-e70cd259e847",
00:15:11.950 "is_configured": true,
00:15:11.950 "data_offset": 0,
00:15:11.950 "data_size": 65536
00:15:11.950 },
00:15:11.950 {
00:15:11.950 "name": "BaseBdev4",
00:15:11.950 "uuid": "2a61eaec-bb70-51a5-984f-894f17000fb5",
00:15:11.950 "is_configured": true,
00:15:11.950 "data_offset": 0,
00:15:11.950 "data_size": 65536
00:15:11.950 }
00:15:11.950 ]
00:15:11.950 }'
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']'
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=509
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:11.950 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:11.950 "name": "raid_bdev1",
00:15:11.950 "uuid": "e2a7e6da-f3d1-4a40-a2ac-446617524588",
00:15:11.950 "strip_size_kb": 64,
00:15:11.950 "state": "online",
00:15:11.950 "raid_level": "raid5f",
00:15:11.950 "superblock": false,
00:15:11.950 "num_base_bdevs": 4,
00:15:11.950 "num_base_bdevs_discovered": 4,
00:15:11.950 "num_base_bdevs_operational": 4,
00:15:11.950 "process": {
00:15:11.950 "type": "rebuild",
00:15:11.950 "target": "spare",
00:15:11.950 "progress": {
00:15:11.950 "blocks": 21120,
00:15:11.950 "percent": 10
00:15:11.950 }
00:15:11.950 },
00:15:11.950 "base_bdevs_list": [
00:15:11.950 {
00:15:11.950 "name": "spare",
00:15:11.950 "uuid": "33dd8815-2fc6-505f-9758-6791835f4ded",
00:15:11.950 "is_configured": true,
00:15:11.950 "data_offset": 0,
00:15:11.950 "data_size": 65536
00:15:11.950 },
00:15:11.950 {
00:15:11.950 "name": "BaseBdev2",
00:15:11.950 "uuid": "5f6f6349-55ea-586c-93a5-7b4bbaf34b53",
00:15:11.951 "is_configured": true,
00:15:11.951 "data_offset": 0,
00:15:11.951 "data_size": 65536
00:15:11.951 },
00:15:11.951 {
00:15:11.951 "name": "BaseBdev3",
00:15:11.951 "uuid": "9fa2ed43-5581-5e48-b2af-e70cd259e847",
00:15:11.951 "is_configured": true,
00:15:11.951 "data_offset": 0,
00:15:11.951 "data_size": 65536
00:15:11.951 },
00:15:11.951 {
00:15:11.951 "name": "BaseBdev4",
00:15:11.951 "uuid": "2a61eaec-bb70-51a5-984f-894f17000fb5",
00:15:11.951 "is_configured": true,
00:15:11.951 "data_offset": 0,
00:15:11.951 "data_size": 65536
00:15:11.951 }
00:15:11.951 ]
00:15:11.951 }'
00:15:11.951 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:11.951 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:11.951 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:12.211 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:12.211 03:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:15:13.153 03:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:13.153 03:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:13.153 03:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:13.153 03:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:13.153 03:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:13.153 03:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:13.153 03:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:13.153 03:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:13.153 03:15:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.153 03:15:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.153 03:15:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.153 03:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:13.153 "name": "raid_bdev1",
00:15:13.153 "uuid": "e2a7e6da-f3d1-4a40-a2ac-446617524588",
00:15:13.153 "strip_size_kb": 64,
00:15:13.153 "state": "online",
00:15:13.153 "raid_level": "raid5f",
00:15:13.153 "superblock": false,
00:15:13.153 "num_base_bdevs": 4,
00:15:13.153 "num_base_bdevs_discovered": 4,
00:15:13.153 "num_base_bdevs_operational": 4,
00:15:13.153 "process": {
00:15:13.153 "type": "rebuild",
00:15:13.153 "target": "spare",
00:15:13.153 "progress": {
00:15:13.153 "blocks": 42240,
00:15:13.153 "percent": 21
00:15:13.153 }
00:15:13.153 },
00:15:13.153 "base_bdevs_list": [
00:15:13.153 {
00:15:13.153 "name": "spare",
00:15:13.153 "uuid": "33dd8815-2fc6-505f-9758-6791835f4ded",
00:15:13.153 "is_configured": true,
00:15:13.153 "data_offset": 0, 00:15:13.153 "data_size": 65536 00:15:13.153 }, 00:15:13.153 { 00:15:13.153 "name": "BaseBdev2", 00:15:13.153 "uuid": "5f6f6349-55ea-586c-93a5-7b4bbaf34b53", 00:15:13.153 "is_configured": true, 00:15:13.153 "data_offset": 0, 00:15:13.153 "data_size": 65536 00:15:13.153 }, 00:15:13.153 { 00:15:13.153 "name": "BaseBdev3", 00:15:13.153 "uuid": "9fa2ed43-5581-5e48-b2af-e70cd259e847", 00:15:13.153 "is_configured": true, 00:15:13.153 "data_offset": 0, 00:15:13.153 "data_size": 65536 00:15:13.153 }, 00:15:13.153 { 00:15:13.153 "name": "BaseBdev4", 00:15:13.153 "uuid": "2a61eaec-bb70-51a5-984f-894f17000fb5", 00:15:13.153 "is_configured": true, 00:15:13.153 "data_offset": 0, 00:15:13.153 "data_size": 65536 00:15:13.153 } 00:15:13.153 ] 00:15:13.153 }' 00:15:13.153 03:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.153 03:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.153 03:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.153 03:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.153 03:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:14.536 03:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.536 03:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.536 03:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.536 03:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.536 03:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.536 03:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:15:14.536 03:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.536 03:15:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.536 03:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.536 03:15:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.536 03:15:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.536 03:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.536 "name": "raid_bdev1", 00:15:14.536 "uuid": "e2a7e6da-f3d1-4a40-a2ac-446617524588", 00:15:14.536 "strip_size_kb": 64, 00:15:14.536 "state": "online", 00:15:14.536 "raid_level": "raid5f", 00:15:14.536 "superblock": false, 00:15:14.536 "num_base_bdevs": 4, 00:15:14.536 "num_base_bdevs_discovered": 4, 00:15:14.536 "num_base_bdevs_operational": 4, 00:15:14.536 "process": { 00:15:14.536 "type": "rebuild", 00:15:14.536 "target": "spare", 00:15:14.536 "progress": { 00:15:14.536 "blocks": 63360, 00:15:14.536 "percent": 32 00:15:14.536 } 00:15:14.536 }, 00:15:14.536 "base_bdevs_list": [ 00:15:14.536 { 00:15:14.536 "name": "spare", 00:15:14.536 "uuid": "33dd8815-2fc6-505f-9758-6791835f4ded", 00:15:14.536 "is_configured": true, 00:15:14.536 "data_offset": 0, 00:15:14.536 "data_size": 65536 00:15:14.536 }, 00:15:14.536 { 00:15:14.536 "name": "BaseBdev2", 00:15:14.536 "uuid": "5f6f6349-55ea-586c-93a5-7b4bbaf34b53", 00:15:14.536 "is_configured": true, 00:15:14.536 "data_offset": 0, 00:15:14.536 "data_size": 65536 00:15:14.536 }, 00:15:14.536 { 00:15:14.536 "name": "BaseBdev3", 00:15:14.536 "uuid": "9fa2ed43-5581-5e48-b2af-e70cd259e847", 00:15:14.536 "is_configured": true, 00:15:14.536 "data_offset": 0, 00:15:14.536 "data_size": 65536 00:15:14.536 }, 00:15:14.536 { 00:15:14.536 "name": "BaseBdev4", 00:15:14.536 "uuid": 
"2a61eaec-bb70-51a5-984f-894f17000fb5", 00:15:14.536 "is_configured": true, 00:15:14.536 "data_offset": 0, 00:15:14.536 "data_size": 65536 00:15:14.536 } 00:15:14.536 ] 00:15:14.536 }' 00:15:14.536 03:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.536 03:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.536 03:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.536 03:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.536 03:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:15.478 03:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.478 03:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.478 03:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.478 03:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.478 03:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.478 03:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.478 03:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.478 03:15:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.478 03:15:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.478 03:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.478 03:15:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.478 03:15:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.478 "name": "raid_bdev1", 00:15:15.478 "uuid": "e2a7e6da-f3d1-4a40-a2ac-446617524588", 00:15:15.478 "strip_size_kb": 64, 00:15:15.478 "state": "online", 00:15:15.478 "raid_level": "raid5f", 00:15:15.478 "superblock": false, 00:15:15.478 "num_base_bdevs": 4, 00:15:15.478 "num_base_bdevs_discovered": 4, 00:15:15.478 "num_base_bdevs_operational": 4, 00:15:15.478 "process": { 00:15:15.478 "type": "rebuild", 00:15:15.478 "target": "spare", 00:15:15.478 "progress": { 00:15:15.478 "blocks": 84480, 00:15:15.478 "percent": 42 00:15:15.478 } 00:15:15.478 }, 00:15:15.478 "base_bdevs_list": [ 00:15:15.478 { 00:15:15.478 "name": "spare", 00:15:15.478 "uuid": "33dd8815-2fc6-505f-9758-6791835f4ded", 00:15:15.478 "is_configured": true, 00:15:15.478 "data_offset": 0, 00:15:15.478 "data_size": 65536 00:15:15.478 }, 00:15:15.478 { 00:15:15.478 "name": "BaseBdev2", 00:15:15.478 "uuid": "5f6f6349-55ea-586c-93a5-7b4bbaf34b53", 00:15:15.478 "is_configured": true, 00:15:15.478 "data_offset": 0, 00:15:15.478 "data_size": 65536 00:15:15.478 }, 00:15:15.478 { 00:15:15.478 "name": "BaseBdev3", 00:15:15.478 "uuid": "9fa2ed43-5581-5e48-b2af-e70cd259e847", 00:15:15.478 "is_configured": true, 00:15:15.478 "data_offset": 0, 00:15:15.478 "data_size": 65536 00:15:15.478 }, 00:15:15.478 { 00:15:15.478 "name": "BaseBdev4", 00:15:15.478 "uuid": "2a61eaec-bb70-51a5-984f-894f17000fb5", 00:15:15.478 "is_configured": true, 00:15:15.478 "data_offset": 0, 00:15:15.478 "data_size": 65536 00:15:15.478 } 00:15:15.478 ] 00:15:15.478 }' 00:15:15.478 03:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.478 03:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.478 03:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.478 03:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:15:15.478 03:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.416 03:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.417 03:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.417 03:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.417 03:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.417 03:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.417 03:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.417 03:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.417 03:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.417 03:15:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.417 03:15:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.417 03:15:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.417 03:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.417 "name": "raid_bdev1", 00:15:16.417 "uuid": "e2a7e6da-f3d1-4a40-a2ac-446617524588", 00:15:16.417 "strip_size_kb": 64, 00:15:16.417 "state": "online", 00:15:16.417 "raid_level": "raid5f", 00:15:16.417 "superblock": false, 00:15:16.417 "num_base_bdevs": 4, 00:15:16.417 "num_base_bdevs_discovered": 4, 00:15:16.417 "num_base_bdevs_operational": 4, 00:15:16.417 "process": { 00:15:16.417 "type": "rebuild", 00:15:16.417 "target": "spare", 00:15:16.417 "progress": { 00:15:16.417 "blocks": 105600, 00:15:16.417 "percent": 53 00:15:16.417 } 00:15:16.417 }, 00:15:16.417 
"base_bdevs_list": [ 00:15:16.417 { 00:15:16.417 "name": "spare", 00:15:16.417 "uuid": "33dd8815-2fc6-505f-9758-6791835f4ded", 00:15:16.417 "is_configured": true, 00:15:16.417 "data_offset": 0, 00:15:16.417 "data_size": 65536 00:15:16.417 }, 00:15:16.417 { 00:15:16.417 "name": "BaseBdev2", 00:15:16.417 "uuid": "5f6f6349-55ea-586c-93a5-7b4bbaf34b53", 00:15:16.417 "is_configured": true, 00:15:16.417 "data_offset": 0, 00:15:16.417 "data_size": 65536 00:15:16.417 }, 00:15:16.417 { 00:15:16.417 "name": "BaseBdev3", 00:15:16.417 "uuid": "9fa2ed43-5581-5e48-b2af-e70cd259e847", 00:15:16.417 "is_configured": true, 00:15:16.417 "data_offset": 0, 00:15:16.417 "data_size": 65536 00:15:16.417 }, 00:15:16.417 { 00:15:16.417 "name": "BaseBdev4", 00:15:16.417 "uuid": "2a61eaec-bb70-51a5-984f-894f17000fb5", 00:15:16.417 "is_configured": true, 00:15:16.417 "data_offset": 0, 00:15:16.417 "data_size": 65536 00:15:16.417 } 00:15:16.417 ] 00:15:16.417 }' 00:15:16.417 03:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.677 03:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.677 03:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.677 03:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.677 03:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:17.618 03:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.618 03:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.618 03:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.618 03:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.618 03:15:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.618 03:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.618 03:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.618 03:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.618 03:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.618 03:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.618 03:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.618 03:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.618 "name": "raid_bdev1", 00:15:17.618 "uuid": "e2a7e6da-f3d1-4a40-a2ac-446617524588", 00:15:17.618 "strip_size_kb": 64, 00:15:17.618 "state": "online", 00:15:17.618 "raid_level": "raid5f", 00:15:17.618 "superblock": false, 00:15:17.618 "num_base_bdevs": 4, 00:15:17.618 "num_base_bdevs_discovered": 4, 00:15:17.618 "num_base_bdevs_operational": 4, 00:15:17.618 "process": { 00:15:17.618 "type": "rebuild", 00:15:17.618 "target": "spare", 00:15:17.618 "progress": { 00:15:17.618 "blocks": 128640, 00:15:17.618 "percent": 65 00:15:17.618 } 00:15:17.618 }, 00:15:17.618 "base_bdevs_list": [ 00:15:17.618 { 00:15:17.618 "name": "spare", 00:15:17.618 "uuid": "33dd8815-2fc6-505f-9758-6791835f4ded", 00:15:17.618 "is_configured": true, 00:15:17.618 "data_offset": 0, 00:15:17.618 "data_size": 65536 00:15:17.618 }, 00:15:17.618 { 00:15:17.618 "name": "BaseBdev2", 00:15:17.618 "uuid": "5f6f6349-55ea-586c-93a5-7b4bbaf34b53", 00:15:17.618 "is_configured": true, 00:15:17.618 "data_offset": 0, 00:15:17.618 "data_size": 65536 00:15:17.618 }, 00:15:17.618 { 00:15:17.618 "name": "BaseBdev3", 00:15:17.618 "uuid": "9fa2ed43-5581-5e48-b2af-e70cd259e847", 00:15:17.618 
"is_configured": true, 00:15:17.618 "data_offset": 0, 00:15:17.618 "data_size": 65536 00:15:17.618 }, 00:15:17.618 { 00:15:17.618 "name": "BaseBdev4", 00:15:17.618 "uuid": "2a61eaec-bb70-51a5-984f-894f17000fb5", 00:15:17.618 "is_configured": true, 00:15:17.618 "data_offset": 0, 00:15:17.618 "data_size": 65536 00:15:17.618 } 00:15:17.618 ] 00:15:17.618 }' 00:15:17.618 03:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.618 03:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.618 03:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.878 03:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.878 03:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:18.818 03:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.818 03:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.818 03:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.818 03:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.818 03:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.818 03:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.818 03:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.818 03:15:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.818 03:15:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.818 03:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:18.818 03:15:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.818 03:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.818 "name": "raid_bdev1", 00:15:18.818 "uuid": "e2a7e6da-f3d1-4a40-a2ac-446617524588", 00:15:18.818 "strip_size_kb": 64, 00:15:18.818 "state": "online", 00:15:18.818 "raid_level": "raid5f", 00:15:18.818 "superblock": false, 00:15:18.818 "num_base_bdevs": 4, 00:15:18.818 "num_base_bdevs_discovered": 4, 00:15:18.818 "num_base_bdevs_operational": 4, 00:15:18.818 "process": { 00:15:18.818 "type": "rebuild", 00:15:18.818 "target": "spare", 00:15:18.818 "progress": { 00:15:18.818 "blocks": 149760, 00:15:18.818 "percent": 76 00:15:18.818 } 00:15:18.818 }, 00:15:18.818 "base_bdevs_list": [ 00:15:18.818 { 00:15:18.818 "name": "spare", 00:15:18.818 "uuid": "33dd8815-2fc6-505f-9758-6791835f4ded", 00:15:18.818 "is_configured": true, 00:15:18.818 "data_offset": 0, 00:15:18.818 "data_size": 65536 00:15:18.818 }, 00:15:18.818 { 00:15:18.818 "name": "BaseBdev2", 00:15:18.818 "uuid": "5f6f6349-55ea-586c-93a5-7b4bbaf34b53", 00:15:18.818 "is_configured": true, 00:15:18.818 "data_offset": 0, 00:15:18.818 "data_size": 65536 00:15:18.818 }, 00:15:18.818 { 00:15:18.818 "name": "BaseBdev3", 00:15:18.818 "uuid": "9fa2ed43-5581-5e48-b2af-e70cd259e847", 00:15:18.818 "is_configured": true, 00:15:18.818 "data_offset": 0, 00:15:18.818 "data_size": 65536 00:15:18.818 }, 00:15:18.818 { 00:15:18.818 "name": "BaseBdev4", 00:15:18.818 "uuid": "2a61eaec-bb70-51a5-984f-894f17000fb5", 00:15:18.818 "is_configured": true, 00:15:18.818 "data_offset": 0, 00:15:18.818 "data_size": 65536 00:15:18.818 } 00:15:18.818 ] 00:15:18.818 }' 00:15:18.818 03:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.818 03:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.818 03:15:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.818 03:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.818 03:15:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:20.197 03:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:20.197 03:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.197 03:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.197 03:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.197 03:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.197 03:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.197 03:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.197 03:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.197 03:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.197 03:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.197 03:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.197 03:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.197 "name": "raid_bdev1", 00:15:20.197 "uuid": "e2a7e6da-f3d1-4a40-a2ac-446617524588", 00:15:20.197 "strip_size_kb": 64, 00:15:20.197 "state": "online", 00:15:20.197 "raid_level": "raid5f", 00:15:20.197 "superblock": false, 00:15:20.197 "num_base_bdevs": 4, 00:15:20.197 "num_base_bdevs_discovered": 4, 00:15:20.197 "num_base_bdevs_operational": 4, 00:15:20.197 "process": { 00:15:20.197 
"type": "rebuild", 00:15:20.197 "target": "spare", 00:15:20.197 "progress": { 00:15:20.197 "blocks": 172800, 00:15:20.197 "percent": 87 00:15:20.197 } 00:15:20.197 }, 00:15:20.197 "base_bdevs_list": [ 00:15:20.197 { 00:15:20.197 "name": "spare", 00:15:20.197 "uuid": "33dd8815-2fc6-505f-9758-6791835f4ded", 00:15:20.197 "is_configured": true, 00:15:20.197 "data_offset": 0, 00:15:20.197 "data_size": 65536 00:15:20.197 }, 00:15:20.197 { 00:15:20.197 "name": "BaseBdev2", 00:15:20.197 "uuid": "5f6f6349-55ea-586c-93a5-7b4bbaf34b53", 00:15:20.197 "is_configured": true, 00:15:20.197 "data_offset": 0, 00:15:20.197 "data_size": 65536 00:15:20.197 }, 00:15:20.197 { 00:15:20.197 "name": "BaseBdev3", 00:15:20.197 "uuid": "9fa2ed43-5581-5e48-b2af-e70cd259e847", 00:15:20.197 "is_configured": true, 00:15:20.197 "data_offset": 0, 00:15:20.197 "data_size": 65536 00:15:20.197 }, 00:15:20.197 { 00:15:20.197 "name": "BaseBdev4", 00:15:20.197 "uuid": "2a61eaec-bb70-51a5-984f-894f17000fb5", 00:15:20.197 "is_configured": true, 00:15:20.197 "data_offset": 0, 00:15:20.197 "data_size": 65536 00:15:20.197 } 00:15:20.197 ] 00:15:20.197 }' 00:15:20.197 03:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.197 03:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:20.197 03:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.197 03:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.197 03:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:21.139 03:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.139 03:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.139 03:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:21.139 03:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.139 03:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.139 03:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.139 03:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.139 03:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.139 03:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.139 03:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.139 03:15:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.139 03:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.139 "name": "raid_bdev1", 00:15:21.139 "uuid": "e2a7e6da-f3d1-4a40-a2ac-446617524588", 00:15:21.139 "strip_size_kb": 64, 00:15:21.139 "state": "online", 00:15:21.139 "raid_level": "raid5f", 00:15:21.139 "superblock": false, 00:15:21.139 "num_base_bdevs": 4, 00:15:21.139 "num_base_bdevs_discovered": 4, 00:15:21.139 "num_base_bdevs_operational": 4, 00:15:21.139 "process": { 00:15:21.139 "type": "rebuild", 00:15:21.139 "target": "spare", 00:15:21.139 "progress": { 00:15:21.139 "blocks": 193920, 00:15:21.139 "percent": 98 00:15:21.139 } 00:15:21.139 }, 00:15:21.139 "base_bdevs_list": [ 00:15:21.139 { 00:15:21.139 "name": "spare", 00:15:21.139 "uuid": "33dd8815-2fc6-505f-9758-6791835f4ded", 00:15:21.139 "is_configured": true, 00:15:21.139 "data_offset": 0, 00:15:21.139 "data_size": 65536 00:15:21.139 }, 00:15:21.139 { 00:15:21.139 "name": "BaseBdev2", 00:15:21.139 "uuid": "5f6f6349-55ea-586c-93a5-7b4bbaf34b53", 00:15:21.139 "is_configured": true, 00:15:21.139 "data_offset": 0, 00:15:21.139 
"data_size": 65536 00:15:21.139 }, 00:15:21.139 { 00:15:21.139 "name": "BaseBdev3", 00:15:21.139 "uuid": "9fa2ed43-5581-5e48-b2af-e70cd259e847", 00:15:21.139 "is_configured": true, 00:15:21.139 "data_offset": 0, 00:15:21.139 "data_size": 65536 00:15:21.139 }, 00:15:21.139 { 00:15:21.139 "name": "BaseBdev4", 00:15:21.139 "uuid": "2a61eaec-bb70-51a5-984f-894f17000fb5", 00:15:21.139 "is_configured": true, 00:15:21.139 "data_offset": 0, 00:15:21.139 "data_size": 65536 00:15:21.139 } 00:15:21.139 ] 00:15:21.139 }' 00:15:21.139 03:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.139 03:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.139 03:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.139 03:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.139 03:15:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:21.139 [2024-11-18 03:15:24.673268] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:21.139 [2024-11-18 03:15:24.673383] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:21.139 [2024-11-18 03:15:24.673463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.081 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:22.081 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.081 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.081 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.081 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:22.081 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.081 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.081 03:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.081 03:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.081 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.081 03:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.341 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.341 "name": "raid_bdev1", 00:15:22.341 "uuid": "e2a7e6da-f3d1-4a40-a2ac-446617524588", 00:15:22.341 "strip_size_kb": 64, 00:15:22.341 "state": "online", 00:15:22.341 "raid_level": "raid5f", 00:15:22.341 "superblock": false, 00:15:22.341 "num_base_bdevs": 4, 00:15:22.341 "num_base_bdevs_discovered": 4, 00:15:22.341 "num_base_bdevs_operational": 4, 00:15:22.341 "base_bdevs_list": [ 00:15:22.341 { 00:15:22.341 "name": "spare", 00:15:22.341 "uuid": "33dd8815-2fc6-505f-9758-6791835f4ded", 00:15:22.341 "is_configured": true, 00:15:22.341 "data_offset": 0, 00:15:22.341 "data_size": 65536 00:15:22.341 }, 00:15:22.341 { 00:15:22.341 "name": "BaseBdev2", 00:15:22.341 "uuid": "5f6f6349-55ea-586c-93a5-7b4bbaf34b53", 00:15:22.341 "is_configured": true, 00:15:22.341 "data_offset": 0, 00:15:22.341 "data_size": 65536 00:15:22.341 }, 00:15:22.341 { 00:15:22.341 "name": "BaseBdev3", 00:15:22.341 "uuid": "9fa2ed43-5581-5e48-b2af-e70cd259e847", 00:15:22.341 "is_configured": true, 00:15:22.341 "data_offset": 0, 00:15:22.341 "data_size": 65536 00:15:22.341 }, 00:15:22.341 { 00:15:22.341 "name": "BaseBdev4", 00:15:22.341 "uuid": "2a61eaec-bb70-51a5-984f-894f17000fb5", 00:15:22.341 "is_configured": true, 00:15:22.341 "data_offset": 0, 
00:15:22.341 "data_size": 65536 00:15:22.341 } 00:15:22.341 ] 00:15:22.341 }' 00:15:22.341 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.341 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:22.341 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.341 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:22.341 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:22.341 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.341 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.341 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.341 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.341 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.341 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.341 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.341 03:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.341 03:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.341 03:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.341 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.341 "name": "raid_bdev1", 00:15:22.341 "uuid": "e2a7e6da-f3d1-4a40-a2ac-446617524588", 00:15:22.341 "strip_size_kb": 64, 00:15:22.341 "state": "online", 00:15:22.341 "raid_level": 
"raid5f", 00:15:22.341 "superblock": false, 00:15:22.342 "num_base_bdevs": 4, 00:15:22.342 "num_base_bdevs_discovered": 4, 00:15:22.342 "num_base_bdevs_operational": 4, 00:15:22.342 "base_bdevs_list": [ 00:15:22.342 { 00:15:22.342 "name": "spare", 00:15:22.342 "uuid": "33dd8815-2fc6-505f-9758-6791835f4ded", 00:15:22.342 "is_configured": true, 00:15:22.342 "data_offset": 0, 00:15:22.342 "data_size": 65536 00:15:22.342 }, 00:15:22.342 { 00:15:22.342 "name": "BaseBdev2", 00:15:22.342 "uuid": "5f6f6349-55ea-586c-93a5-7b4bbaf34b53", 00:15:22.342 "is_configured": true, 00:15:22.342 "data_offset": 0, 00:15:22.342 "data_size": 65536 00:15:22.342 }, 00:15:22.342 { 00:15:22.342 "name": "BaseBdev3", 00:15:22.342 "uuid": "9fa2ed43-5581-5e48-b2af-e70cd259e847", 00:15:22.342 "is_configured": true, 00:15:22.342 "data_offset": 0, 00:15:22.342 "data_size": 65536 00:15:22.342 }, 00:15:22.342 { 00:15:22.342 "name": "BaseBdev4", 00:15:22.342 "uuid": "2a61eaec-bb70-51a5-984f-894f17000fb5", 00:15:22.342 "is_configured": true, 00:15:22.342 "data_offset": 0, 00:15:22.342 "data_size": 65536 00:15:22.342 } 00:15:22.342 ] 00:15:22.342 }' 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.342 03:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.602 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.602 "name": "raid_bdev1", 00:15:22.602 "uuid": "e2a7e6da-f3d1-4a40-a2ac-446617524588", 00:15:22.602 "strip_size_kb": 64, 00:15:22.602 "state": "online", 00:15:22.602 "raid_level": "raid5f", 00:15:22.602 "superblock": false, 00:15:22.602 "num_base_bdevs": 4, 00:15:22.602 "num_base_bdevs_discovered": 4, 00:15:22.602 "num_base_bdevs_operational": 4, 00:15:22.602 "base_bdevs_list": [ 00:15:22.602 { 00:15:22.602 "name": "spare", 00:15:22.602 "uuid": "33dd8815-2fc6-505f-9758-6791835f4ded", 00:15:22.602 "is_configured": true, 00:15:22.602 "data_offset": 0, 00:15:22.602 "data_size": 65536 00:15:22.602 }, 00:15:22.602 { 00:15:22.602 "name": "BaseBdev2", 
00:15:22.602 "uuid": "5f6f6349-55ea-586c-93a5-7b4bbaf34b53", 00:15:22.602 "is_configured": true, 00:15:22.602 "data_offset": 0, 00:15:22.603 "data_size": 65536 00:15:22.603 }, 00:15:22.603 { 00:15:22.603 "name": "BaseBdev3", 00:15:22.603 "uuid": "9fa2ed43-5581-5e48-b2af-e70cd259e847", 00:15:22.603 "is_configured": true, 00:15:22.603 "data_offset": 0, 00:15:22.603 "data_size": 65536 00:15:22.603 }, 00:15:22.603 { 00:15:22.603 "name": "BaseBdev4", 00:15:22.603 "uuid": "2a61eaec-bb70-51a5-984f-894f17000fb5", 00:15:22.603 "is_configured": true, 00:15:22.603 "data_offset": 0, 00:15:22.603 "data_size": 65536 00:15:22.603 } 00:15:22.603 ] 00:15:22.603 }' 00:15:22.603 03:15:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.603 03:15:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.862 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:22.862 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.862 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.862 [2024-11-18 03:15:26.319855] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:22.862 [2024-11-18 03:15:26.319886] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.862 [2024-11-18 03:15:26.320008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.863 [2024-11-18 03:15:26.320104] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.863 [2024-11-18 03:15:26.320117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # jq length 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:22.863 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:23.122 /dev/nbd0 00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:23.122 1+0 records in 00:15:23.122 1+0 records out 00:15:23.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314281 s, 13.0 MB/s 00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:23.122 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:23.383 /dev/nbd1 00:15:23.383 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:23.383 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:23.383 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:23.383 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:23.383 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:23.383 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:23.383 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:23.383 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:23.383 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:23.383 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:23.383 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:23.383 1+0 records in 00:15:23.383 1+0 records out 00:15:23.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209468 s, 19.6 MB/s 00:15:23.383 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.383 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:23.383 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.383 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:23.383 03:15:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:23.383 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:23.384 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:23.384 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:23.384 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:23.384 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:23.384 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:23.384 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:23.384 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:23.384 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:23.384 03:15:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:23.644 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:23.644 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:23.644 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:23.644 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.644 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.644 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:15:23.644 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:23.644 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.644 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:23.644 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:23.904 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:23.904 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:23.904 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:23.904 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.904 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.904 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:23.904 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:23.904 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.904 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:23.904 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 95137 00:15:23.904 03:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 95137 ']' 00:15:23.904 03:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 95137 00:15:23.904 03:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:23.904 03:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:23.904 03:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 95137 00:15:23.904 03:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:23.905 03:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:23.905 03:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95137' 00:15:23.905 killing process with pid 95137 00:15:23.905 03:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 95137 00:15:23.905 Received shutdown signal, test time was about 60.000000 seconds 00:15:23.905 00:15:23.905 Latency(us) 00:15:23.905 [2024-11-18T03:15:27.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.905 [2024-11-18T03:15:27.482Z] =================================================================================================================== 00:15:23.905 [2024-11-18T03:15:27.482Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:23.905 [2024-11-18 03:15:27.389088] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:23.905 03:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 95137 00:15:23.905 [2024-11-18 03:15:27.439694] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:24.165 03:15:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:24.165 00:15:24.165 real 0m17.936s 00:15:24.165 user 0m21.447s 00:15:24.165 sys 0m2.132s 00:15:24.165 03:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:24.165 03:15:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.165 ************************************ 00:15:24.165 END TEST raid5f_rebuild_test 00:15:24.165 ************************************ 00:15:24.165 03:15:27 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:15:24.165 03:15:27 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:24.165 03:15:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:24.165 03:15:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:24.165 ************************************ 00:15:24.165 START TEST raid5f_rebuild_test_sb 00:15:24.165 ************************************ 00:15:24.165 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:15:24.165 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:24.165 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:24.165 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:24.165 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:24.165 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:24.426 03:15:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95631 
00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95631 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 95631 ']' 00:15:24.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.426 03:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.426 [2024-11-18 03:15:27.838330] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:24.426 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:24.426 Zero copy mechanism will not be used. 
00:15:24.426 [2024-11-18 03:15:27.838582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95631 ] 00:15:24.426 [2024-11-18 03:15:27.988954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.686 [2024-11-18 03:15:28.037376] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.686 [2024-11-18 03:15:28.079470] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.686 [2024-11-18 03:15:28.079586] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.257 BaseBdev1_malloc 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.257 [2024-11-18 03:15:28.697486] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:25.257 [2024-11-18 03:15:28.697588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.257 [2024-11-18 03:15:28.697632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:25.257 [2024-11-18 03:15:28.697674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.257 [2024-11-18 03:15:28.699702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.257 [2024-11-18 03:15:28.699771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:25.257 BaseBdev1 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.257 BaseBdev2_malloc 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.257 [2024-11-18 03:15:28.735489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:25.257 [2024-11-18 03:15:28.735579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:25.257 [2024-11-18 03:15:28.735618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:25.257 [2024-11-18 03:15:28.735645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.257 [2024-11-18 03:15:28.737652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.257 [2024-11-18 03:15:28.737721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:25.257 BaseBdev2 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.257 BaseBdev3_malloc 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.257 [2024-11-18 03:15:28.764069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:25.257 [2024-11-18 03:15:28.764155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.257 [2024-11-18 03:15:28.764184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:25.257 [2024-11-18 
03:15:28.764193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.257 [2024-11-18 03:15:28.766180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.257 [2024-11-18 03:15:28.766250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:25.257 BaseBdev3 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.257 BaseBdev4_malloc 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.257 [2024-11-18 03:15:28.788600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:25.257 [2024-11-18 03:15:28.788696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.257 [2024-11-18 03:15:28.788739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:25.257 [2024-11-18 03:15:28.788767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.257 [2024-11-18 03:15:28.790777] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:15:25.257 [2024-11-18 03:15:28.790845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:25.257 BaseBdev4 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.257 spare_malloc 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.257 spare_delay 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.257 [2024-11-18 03:15:28.821128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:25.257 [2024-11-18 03:15:28.821229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.257 [2024-11-18 03:15:28.821270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:15:25.257 [2024-11-18 03:15:28.821299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.257 [2024-11-18 03:15:28.823327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.257 [2024-11-18 03:15:28.823397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:25.257 spare 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.257 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:25.258 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.258 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.258 [2024-11-18 03:15:28.829205] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.258 [2024-11-18 03:15:28.830997] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.258 [2024-11-18 03:15:28.831096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:25.258 [2024-11-18 03:15:28.831155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:25.258 [2024-11-18 03:15:28.831356] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:25.258 [2024-11-18 03:15:28.831399] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:25.518 [2024-11-18 03:15:28.831649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:25.518 [2024-11-18 03:15:28.832130] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:25.518 [2024-11-18 03:15:28.832154] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000006280 00:15:25.518 [2024-11-18 03:15:28.832277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.518 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.518 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:25.518 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.518 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.518 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.518 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.518 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.518 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.518 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.518 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.518 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.518 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.518 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.518 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.518 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.518 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.518 03:15:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.518 "name": "raid_bdev1", 00:15:25.518 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:25.518 "strip_size_kb": 64, 00:15:25.518 "state": "online", 00:15:25.518 "raid_level": "raid5f", 00:15:25.518 "superblock": true, 00:15:25.518 "num_base_bdevs": 4, 00:15:25.518 "num_base_bdevs_discovered": 4, 00:15:25.518 "num_base_bdevs_operational": 4, 00:15:25.518 "base_bdevs_list": [ 00:15:25.518 { 00:15:25.518 "name": "BaseBdev1", 00:15:25.518 "uuid": "33b908c9-b28b-5b5f-b8b1-c78458a46492", 00:15:25.518 "is_configured": true, 00:15:25.518 "data_offset": 2048, 00:15:25.518 "data_size": 63488 00:15:25.518 }, 00:15:25.518 { 00:15:25.518 "name": "BaseBdev2", 00:15:25.518 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:25.518 "is_configured": true, 00:15:25.518 "data_offset": 2048, 00:15:25.518 "data_size": 63488 00:15:25.518 }, 00:15:25.518 { 00:15:25.518 "name": "BaseBdev3", 00:15:25.518 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:25.518 "is_configured": true, 00:15:25.518 "data_offset": 2048, 00:15:25.518 "data_size": 63488 00:15:25.518 }, 00:15:25.518 { 00:15:25.518 "name": "BaseBdev4", 00:15:25.518 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:25.518 "is_configured": true, 00:15:25.518 "data_offset": 2048, 00:15:25.518 "data_size": 63488 00:15:25.518 } 00:15:25.518 ] 00:15:25.518 }' 00:15:25.518 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.518 03:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.781 03:15:29 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.781 [2024-11-18 03:15:29.257500] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:25.781 03:15:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:25.781 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:26.043 [2024-11-18 03:15:29.540871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:26.043 /dev/nbd0 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:26.043 1+0 records in 00:15:26.043 
1+0 records out 00:15:26.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209262 s, 19.6 MB/s 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:26.043 03:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:15:26.611 496+0 records in 00:15:26.611 496+0 records out 00:15:26.611 97517568 bytes (98 MB, 93 MiB) copied, 0.423022 s, 231 MB/s 00:15:26.611 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:26.611 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:26.611 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:26.611 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:26.611 03:15:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:26.611 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.611 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:26.871 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:26.871 [2024-11-18 03:15:30.240139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.872 [2024-11-18 03:15:30.261776] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:26.872 03:15:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.872 "name": "raid_bdev1", 00:15:26.872 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:26.872 "strip_size_kb": 64, 00:15:26.872 "state": "online", 00:15:26.872 "raid_level": "raid5f", 00:15:26.872 "superblock": true, 00:15:26.872 "num_base_bdevs": 4, 00:15:26.872 "num_base_bdevs_discovered": 3, 00:15:26.872 "num_base_bdevs_operational": 3, 00:15:26.872 
"base_bdevs_list": [ 00:15:26.872 { 00:15:26.872 "name": null, 00:15:26.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.872 "is_configured": false, 00:15:26.872 "data_offset": 0, 00:15:26.872 "data_size": 63488 00:15:26.872 }, 00:15:26.872 { 00:15:26.872 "name": "BaseBdev2", 00:15:26.872 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:26.872 "is_configured": true, 00:15:26.872 "data_offset": 2048, 00:15:26.872 "data_size": 63488 00:15:26.872 }, 00:15:26.872 { 00:15:26.872 "name": "BaseBdev3", 00:15:26.872 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:26.872 "is_configured": true, 00:15:26.872 "data_offset": 2048, 00:15:26.872 "data_size": 63488 00:15:26.872 }, 00:15:26.872 { 00:15:26.872 "name": "BaseBdev4", 00:15:26.872 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:26.872 "is_configured": true, 00:15:26.872 "data_offset": 2048, 00:15:26.872 "data_size": 63488 00:15:26.872 } 00:15:26.872 ] 00:15:26.872 }' 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.872 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.132 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:27.132 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.132 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.132 [2024-11-18 03:15:30.689121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:27.132 [2024-11-18 03:15:30.692658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:15:27.132 [2024-11-18 03:15:30.695086] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:27.132 03:15:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.132 
03:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.571 "name": "raid_bdev1", 00:15:28.571 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:28.571 "strip_size_kb": 64, 00:15:28.571 "state": "online", 00:15:28.571 "raid_level": "raid5f", 00:15:28.571 "superblock": true, 00:15:28.571 "num_base_bdevs": 4, 00:15:28.571 "num_base_bdevs_discovered": 4, 00:15:28.571 "num_base_bdevs_operational": 4, 00:15:28.571 "process": { 00:15:28.571 "type": "rebuild", 00:15:28.571 "target": "spare", 00:15:28.571 "progress": { 00:15:28.571 "blocks": 19200, 00:15:28.571 "percent": 10 00:15:28.571 } 00:15:28.571 }, 00:15:28.571 "base_bdevs_list": [ 00:15:28.571 { 00:15:28.571 "name": "spare", 00:15:28.571 "uuid": 
"f52bf927-0f02-5bb2-bc07-bc2a631d2a4a", 00:15:28.571 "is_configured": true, 00:15:28.571 "data_offset": 2048, 00:15:28.571 "data_size": 63488 00:15:28.571 }, 00:15:28.571 { 00:15:28.571 "name": "BaseBdev2", 00:15:28.571 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:28.571 "is_configured": true, 00:15:28.571 "data_offset": 2048, 00:15:28.571 "data_size": 63488 00:15:28.571 }, 00:15:28.571 { 00:15:28.571 "name": "BaseBdev3", 00:15:28.571 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:28.571 "is_configured": true, 00:15:28.571 "data_offset": 2048, 00:15:28.571 "data_size": 63488 00:15:28.571 }, 00:15:28.571 { 00:15:28.571 "name": "BaseBdev4", 00:15:28.571 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:28.571 "is_configured": true, 00:15:28.571 "data_offset": 2048, 00:15:28.571 "data_size": 63488 00:15:28.571 } 00:15:28.571 ] 00:15:28.571 }' 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.571 [2024-11-18 03:15:31.854302] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:28.571 [2024-11-18 03:15:31.901113] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:28.571 [2024-11-18 03:15:31.901221] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.571 [2024-11-18 03:15:31.901260] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:28.571 [2024-11-18 03:15:31.901284] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.571 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.572 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:28.572 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.572 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.572 "name": "raid_bdev1", 00:15:28.572 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:28.572 "strip_size_kb": 64, 00:15:28.572 "state": "online", 00:15:28.572 "raid_level": "raid5f", 00:15:28.572 "superblock": true, 00:15:28.572 "num_base_bdevs": 4, 00:15:28.572 "num_base_bdevs_discovered": 3, 00:15:28.572 "num_base_bdevs_operational": 3, 00:15:28.572 "base_bdevs_list": [ 00:15:28.572 { 00:15:28.572 "name": null, 00:15:28.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.572 "is_configured": false, 00:15:28.572 "data_offset": 0, 00:15:28.572 "data_size": 63488 00:15:28.572 }, 00:15:28.572 { 00:15:28.572 "name": "BaseBdev2", 00:15:28.572 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:28.572 "is_configured": true, 00:15:28.572 "data_offset": 2048, 00:15:28.572 "data_size": 63488 00:15:28.572 }, 00:15:28.572 { 00:15:28.572 "name": "BaseBdev3", 00:15:28.572 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:28.572 "is_configured": true, 00:15:28.572 "data_offset": 2048, 00:15:28.572 "data_size": 63488 00:15:28.572 }, 00:15:28.572 { 00:15:28.572 "name": "BaseBdev4", 00:15:28.572 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:28.572 "is_configured": true, 00:15:28.572 "data_offset": 2048, 00:15:28.572 "data_size": 63488 00:15:28.572 } 00:15:28.572 ] 00:15:28.572 }' 00:15:28.572 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.572 03:15:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.831 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:28.831 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.831 
03:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:28.832 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:28.832 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.832 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.832 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.832 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.832 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.832 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.832 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.832 "name": "raid_bdev1", 00:15:28.832 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:28.832 "strip_size_kb": 64, 00:15:28.832 "state": "online", 00:15:28.832 "raid_level": "raid5f", 00:15:28.832 "superblock": true, 00:15:28.832 "num_base_bdevs": 4, 00:15:28.832 "num_base_bdevs_discovered": 3, 00:15:28.832 "num_base_bdevs_operational": 3, 00:15:28.832 "base_bdevs_list": [ 00:15:28.832 { 00:15:28.832 "name": null, 00:15:28.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.832 "is_configured": false, 00:15:28.832 "data_offset": 0, 00:15:28.832 "data_size": 63488 00:15:28.832 }, 00:15:28.832 { 00:15:28.832 "name": "BaseBdev2", 00:15:28.832 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:28.832 "is_configured": true, 00:15:28.832 "data_offset": 2048, 00:15:28.832 "data_size": 63488 00:15:28.832 }, 00:15:28.832 { 00:15:28.832 "name": "BaseBdev3", 00:15:28.832 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:28.832 "is_configured": true, 00:15:28.832 "data_offset": 2048, 00:15:28.832 
"data_size": 63488 00:15:28.832 }, 00:15:28.832 { 00:15:28.832 "name": "BaseBdev4", 00:15:28.832 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:28.832 "is_configured": true, 00:15:28.832 "data_offset": 2048, 00:15:28.832 "data_size": 63488 00:15:28.832 } 00:15:28.832 ] 00:15:28.832 }' 00:15:28.832 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.091 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.091 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.091 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.092 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:29.092 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.092 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.092 [2024-11-18 03:15:32.485659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.092 [2024-11-18 03:15:32.488979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:15:29.092 [2024-11-18 03:15:32.491218] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:29.092 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.092 03:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:30.041 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.041 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.041 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.041 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.041 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.041 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.041 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.041 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.041 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.041 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.041 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.041 "name": "raid_bdev1", 00:15:30.041 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:30.041 "strip_size_kb": 64, 00:15:30.041 "state": "online", 00:15:30.041 "raid_level": "raid5f", 00:15:30.041 "superblock": true, 00:15:30.041 "num_base_bdevs": 4, 00:15:30.041 "num_base_bdevs_discovered": 4, 00:15:30.041 "num_base_bdevs_operational": 4, 00:15:30.041 "process": { 00:15:30.041 "type": "rebuild", 00:15:30.041 "target": "spare", 00:15:30.041 "progress": { 00:15:30.041 "blocks": 19200, 00:15:30.041 "percent": 10 00:15:30.041 } 00:15:30.041 }, 00:15:30.041 "base_bdevs_list": [ 00:15:30.041 { 00:15:30.041 "name": "spare", 00:15:30.041 "uuid": "f52bf927-0f02-5bb2-bc07-bc2a631d2a4a", 00:15:30.041 "is_configured": true, 00:15:30.041 "data_offset": 2048, 00:15:30.041 "data_size": 63488 00:15:30.041 }, 00:15:30.041 { 00:15:30.041 "name": "BaseBdev2", 00:15:30.041 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:30.041 "is_configured": true, 00:15:30.041 "data_offset": 2048, 00:15:30.041 "data_size": 63488 00:15:30.041 }, 00:15:30.041 { 
00:15:30.041 "name": "BaseBdev3", 00:15:30.041 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:30.041 "is_configured": true, 00:15:30.041 "data_offset": 2048, 00:15:30.041 "data_size": 63488 00:15:30.041 }, 00:15:30.041 { 00:15:30.041 "name": "BaseBdev4", 00:15:30.041 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:30.041 "is_configured": true, 00:15:30.041 "data_offset": 2048, 00:15:30.041 "data_size": 63488 00:15:30.041 } 00:15:30.041 ] 00:15:30.041 }' 00:15:30.041 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.041 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.041 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:30.324 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=527 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.324 "name": "raid_bdev1", 00:15:30.324 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:30.324 "strip_size_kb": 64, 00:15:30.324 "state": "online", 00:15:30.324 "raid_level": "raid5f", 00:15:30.324 "superblock": true, 00:15:30.324 "num_base_bdevs": 4, 00:15:30.324 "num_base_bdevs_discovered": 4, 00:15:30.324 "num_base_bdevs_operational": 4, 00:15:30.324 "process": { 00:15:30.324 "type": "rebuild", 00:15:30.324 "target": "spare", 00:15:30.324 "progress": { 00:15:30.324 "blocks": 21120, 00:15:30.324 "percent": 11 00:15:30.324 } 00:15:30.324 }, 00:15:30.324 "base_bdevs_list": [ 00:15:30.324 { 00:15:30.324 "name": "spare", 00:15:30.324 "uuid": "f52bf927-0f02-5bb2-bc07-bc2a631d2a4a", 00:15:30.324 "is_configured": true, 00:15:30.324 "data_offset": 2048, 00:15:30.324 "data_size": 63488 00:15:30.324 }, 00:15:30.324 { 00:15:30.324 "name": "BaseBdev2", 00:15:30.324 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:30.324 "is_configured": true, 00:15:30.324 "data_offset": 2048, 00:15:30.324 "data_size": 63488 00:15:30.324 }, 00:15:30.324 { 
00:15:30.324 "name": "BaseBdev3", 00:15:30.324 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:30.324 "is_configured": true, 00:15:30.324 "data_offset": 2048, 00:15:30.324 "data_size": 63488 00:15:30.324 }, 00:15:30.324 { 00:15:30.324 "name": "BaseBdev4", 00:15:30.324 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:30.324 "is_configured": true, 00:15:30.324 "data_offset": 2048, 00:15:30.324 "data_size": 63488 00:15:30.324 } 00:15:30.324 ] 00:15:30.324 }' 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.324 03:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:31.280 03:15:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.280 03:15:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.280 03:15:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.280 03:15:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.280 03:15:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.280 03:15:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.280 03:15:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.280 03:15:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.280 03:15:34 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.280 03:15:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.280 03:15:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.280 03:15:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.280 "name": "raid_bdev1", 00:15:31.280 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:31.280 "strip_size_kb": 64, 00:15:31.280 "state": "online", 00:15:31.280 "raid_level": "raid5f", 00:15:31.280 "superblock": true, 00:15:31.280 "num_base_bdevs": 4, 00:15:31.280 "num_base_bdevs_discovered": 4, 00:15:31.280 "num_base_bdevs_operational": 4, 00:15:31.280 "process": { 00:15:31.280 "type": "rebuild", 00:15:31.280 "target": "spare", 00:15:31.280 "progress": { 00:15:31.280 "blocks": 42240, 00:15:31.280 "percent": 22 00:15:31.280 } 00:15:31.280 }, 00:15:31.280 "base_bdevs_list": [ 00:15:31.280 { 00:15:31.280 "name": "spare", 00:15:31.280 "uuid": "f52bf927-0f02-5bb2-bc07-bc2a631d2a4a", 00:15:31.280 "is_configured": true, 00:15:31.280 "data_offset": 2048, 00:15:31.280 "data_size": 63488 00:15:31.280 }, 00:15:31.280 { 00:15:31.280 "name": "BaseBdev2", 00:15:31.280 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:31.280 "is_configured": true, 00:15:31.280 "data_offset": 2048, 00:15:31.280 "data_size": 63488 00:15:31.280 }, 00:15:31.280 { 00:15:31.280 "name": "BaseBdev3", 00:15:31.280 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:31.280 "is_configured": true, 00:15:31.280 "data_offset": 2048, 00:15:31.280 "data_size": 63488 00:15:31.280 }, 00:15:31.280 { 00:15:31.280 "name": "BaseBdev4", 00:15:31.280 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:31.280 "is_configured": true, 00:15:31.280 "data_offset": 2048, 00:15:31.280 "data_size": 63488 00:15:31.280 } 00:15:31.280 ] 00:15:31.280 }' 00:15:31.280 03:15:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.540 03:15:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.540 03:15:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.540 03:15:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.540 03:15:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:32.481 03:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.481 03:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.481 03:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.481 03:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.481 03:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.481 03:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.481 03:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.481 03:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.481 03:15:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.481 03:15:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.481 03:15:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.481 03:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.481 "name": "raid_bdev1", 00:15:32.481 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:32.481 "strip_size_kb": 64, 00:15:32.481 "state": 
"online", 00:15:32.481 "raid_level": "raid5f", 00:15:32.481 "superblock": true, 00:15:32.481 "num_base_bdevs": 4, 00:15:32.481 "num_base_bdevs_discovered": 4, 00:15:32.481 "num_base_bdevs_operational": 4, 00:15:32.481 "process": { 00:15:32.481 "type": "rebuild", 00:15:32.481 "target": "spare", 00:15:32.481 "progress": { 00:15:32.481 "blocks": 65280, 00:15:32.481 "percent": 34 00:15:32.481 } 00:15:32.481 }, 00:15:32.481 "base_bdevs_list": [ 00:15:32.481 { 00:15:32.481 "name": "spare", 00:15:32.481 "uuid": "f52bf927-0f02-5bb2-bc07-bc2a631d2a4a", 00:15:32.481 "is_configured": true, 00:15:32.481 "data_offset": 2048, 00:15:32.481 "data_size": 63488 00:15:32.481 }, 00:15:32.481 { 00:15:32.481 "name": "BaseBdev2", 00:15:32.481 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:32.481 "is_configured": true, 00:15:32.481 "data_offset": 2048, 00:15:32.481 "data_size": 63488 00:15:32.481 }, 00:15:32.481 { 00:15:32.481 "name": "BaseBdev3", 00:15:32.481 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:32.481 "is_configured": true, 00:15:32.481 "data_offset": 2048, 00:15:32.481 "data_size": 63488 00:15:32.481 }, 00:15:32.481 { 00:15:32.481 "name": "BaseBdev4", 00:15:32.481 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:32.481 "is_configured": true, 00:15:32.481 "data_offset": 2048, 00:15:32.481 "data_size": 63488 00:15:32.481 } 00:15:32.481 ] 00:15:32.481 }' 00:15:32.482 03:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.482 03:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.482 03:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.482 03:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.482 03:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:33.865 03:15:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:33.865 03:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.865 03:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.865 03:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.865 03:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.865 03:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.865 03:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.865 03:15:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.865 03:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.865 03:15:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.865 03:15:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.865 03:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.865 "name": "raid_bdev1", 00:15:33.865 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:33.865 "strip_size_kb": 64, 00:15:33.865 "state": "online", 00:15:33.865 "raid_level": "raid5f", 00:15:33.865 "superblock": true, 00:15:33.865 "num_base_bdevs": 4, 00:15:33.865 "num_base_bdevs_discovered": 4, 00:15:33.865 "num_base_bdevs_operational": 4, 00:15:33.865 "process": { 00:15:33.865 "type": "rebuild", 00:15:33.865 "target": "spare", 00:15:33.865 "progress": { 00:15:33.865 "blocks": 86400, 00:15:33.865 "percent": 45 00:15:33.865 } 00:15:33.865 }, 00:15:33.865 "base_bdevs_list": [ 00:15:33.865 { 00:15:33.865 "name": "spare", 00:15:33.865 "uuid": "f52bf927-0f02-5bb2-bc07-bc2a631d2a4a", 
00:15:33.865 "is_configured": true, 00:15:33.865 "data_offset": 2048, 00:15:33.865 "data_size": 63488 00:15:33.865 }, 00:15:33.865 { 00:15:33.865 "name": "BaseBdev2", 00:15:33.865 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:33.865 "is_configured": true, 00:15:33.865 "data_offset": 2048, 00:15:33.865 "data_size": 63488 00:15:33.865 }, 00:15:33.865 { 00:15:33.865 "name": "BaseBdev3", 00:15:33.865 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:33.865 "is_configured": true, 00:15:33.865 "data_offset": 2048, 00:15:33.865 "data_size": 63488 00:15:33.865 }, 00:15:33.865 { 00:15:33.865 "name": "BaseBdev4", 00:15:33.865 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:33.865 "is_configured": true, 00:15:33.865 "data_offset": 2048, 00:15:33.866 "data_size": 63488 00:15:33.866 } 00:15:33.866 ] 00:15:33.866 }' 00:15:33.866 03:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.866 03:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:33.866 03:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.866 03:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.866 03:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:34.806 03:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:34.806 03:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.806 03:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.806 03:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.806 03:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.806 03:15:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.806 03:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.806 03:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.806 03:15:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.806 03:15:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.806 03:15:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.806 03:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.806 "name": "raid_bdev1", 00:15:34.806 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:34.806 "strip_size_kb": 64, 00:15:34.806 "state": "online", 00:15:34.806 "raid_level": "raid5f", 00:15:34.806 "superblock": true, 00:15:34.806 "num_base_bdevs": 4, 00:15:34.806 "num_base_bdevs_discovered": 4, 00:15:34.806 "num_base_bdevs_operational": 4, 00:15:34.806 "process": { 00:15:34.806 "type": "rebuild", 00:15:34.806 "target": "spare", 00:15:34.806 "progress": { 00:15:34.806 "blocks": 107520, 00:15:34.806 "percent": 56 00:15:34.806 } 00:15:34.806 }, 00:15:34.806 "base_bdevs_list": [ 00:15:34.806 { 00:15:34.806 "name": "spare", 00:15:34.806 "uuid": "f52bf927-0f02-5bb2-bc07-bc2a631d2a4a", 00:15:34.806 "is_configured": true, 00:15:34.806 "data_offset": 2048, 00:15:34.806 "data_size": 63488 00:15:34.806 }, 00:15:34.806 { 00:15:34.806 "name": "BaseBdev2", 00:15:34.806 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:34.806 "is_configured": true, 00:15:34.806 "data_offset": 2048, 00:15:34.806 "data_size": 63488 00:15:34.806 }, 00:15:34.806 { 00:15:34.806 "name": "BaseBdev3", 00:15:34.806 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:34.806 "is_configured": true, 00:15:34.806 "data_offset": 2048, 00:15:34.806 
"data_size": 63488 00:15:34.806 }, 00:15:34.806 { 00:15:34.806 "name": "BaseBdev4", 00:15:34.806 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:34.806 "is_configured": true, 00:15:34.806 "data_offset": 2048, 00:15:34.806 "data_size": 63488 00:15:34.806 } 00:15:34.806 ] 00:15:34.806 }' 00:15:34.806 03:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.806 03:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.807 03:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.807 03:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.807 03:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:36.192 03:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.192 03:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.192 03:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.192 03:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.192 03:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.192 03:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.192 03:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.192 03:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.192 03:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.192 03:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.192 
03:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.192 03:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.192 "name": "raid_bdev1", 00:15:36.192 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:36.192 "strip_size_kb": 64, 00:15:36.192 "state": "online", 00:15:36.192 "raid_level": "raid5f", 00:15:36.192 "superblock": true, 00:15:36.192 "num_base_bdevs": 4, 00:15:36.192 "num_base_bdevs_discovered": 4, 00:15:36.192 "num_base_bdevs_operational": 4, 00:15:36.192 "process": { 00:15:36.192 "type": "rebuild", 00:15:36.192 "target": "spare", 00:15:36.192 "progress": { 00:15:36.192 "blocks": 130560, 00:15:36.192 "percent": 68 00:15:36.192 } 00:15:36.192 }, 00:15:36.192 "base_bdevs_list": [ 00:15:36.192 { 00:15:36.192 "name": "spare", 00:15:36.192 "uuid": "f52bf927-0f02-5bb2-bc07-bc2a631d2a4a", 00:15:36.192 "is_configured": true, 00:15:36.192 "data_offset": 2048, 00:15:36.192 "data_size": 63488 00:15:36.192 }, 00:15:36.192 { 00:15:36.192 "name": "BaseBdev2", 00:15:36.192 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:36.192 "is_configured": true, 00:15:36.192 "data_offset": 2048, 00:15:36.192 "data_size": 63488 00:15:36.192 }, 00:15:36.192 { 00:15:36.192 "name": "BaseBdev3", 00:15:36.192 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:36.192 "is_configured": true, 00:15:36.192 "data_offset": 2048, 00:15:36.192 "data_size": 63488 00:15:36.192 }, 00:15:36.192 { 00:15:36.192 "name": "BaseBdev4", 00:15:36.192 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:36.192 "is_configured": true, 00:15:36.192 "data_offset": 2048, 00:15:36.192 "data_size": 63488 00:15:36.192 } 00:15:36.192 ] 00:15:36.192 }' 00:15:36.192 03:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.192 03:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.192 03:15:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.192 03:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.192 03:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.133 03:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.133 03:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.133 03:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.133 03:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.133 03:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.133 03:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.133 03:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.133 03:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.133 03:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.133 03:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.133 03:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.133 03:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.133 "name": "raid_bdev1", 00:15:37.133 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:37.133 "strip_size_kb": 64, 00:15:37.133 "state": "online", 00:15:37.133 "raid_level": "raid5f", 00:15:37.133 "superblock": true, 00:15:37.133 "num_base_bdevs": 4, 00:15:37.133 "num_base_bdevs_discovered": 4, 00:15:37.133 "num_base_bdevs_operational": 
4, 00:15:37.133 "process": { 00:15:37.133 "type": "rebuild", 00:15:37.133 "target": "spare", 00:15:37.133 "progress": { 00:15:37.133 "blocks": 151680, 00:15:37.133 "percent": 79 00:15:37.133 } 00:15:37.133 }, 00:15:37.133 "base_bdevs_list": [ 00:15:37.133 { 00:15:37.133 "name": "spare", 00:15:37.133 "uuid": "f52bf927-0f02-5bb2-bc07-bc2a631d2a4a", 00:15:37.133 "is_configured": true, 00:15:37.133 "data_offset": 2048, 00:15:37.133 "data_size": 63488 00:15:37.133 }, 00:15:37.133 { 00:15:37.133 "name": "BaseBdev2", 00:15:37.133 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:37.133 "is_configured": true, 00:15:37.133 "data_offset": 2048, 00:15:37.133 "data_size": 63488 00:15:37.133 }, 00:15:37.133 { 00:15:37.133 "name": "BaseBdev3", 00:15:37.133 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:37.133 "is_configured": true, 00:15:37.133 "data_offset": 2048, 00:15:37.133 "data_size": 63488 00:15:37.133 }, 00:15:37.133 { 00:15:37.133 "name": "BaseBdev4", 00:15:37.133 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:37.133 "is_configured": true, 00:15:37.133 "data_offset": 2048, 00:15:37.133 "data_size": 63488 00:15:37.133 } 00:15:37.133 ] 00:15:37.133 }' 00:15:37.133 03:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.133 03:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.133 03:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.133 03:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.133 03:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:38.074 03:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:38.074 03:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.074 
03:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.074 03:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.074 03:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.074 03:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.334 03:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.334 03:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.334 03:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.334 03:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.334 03:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.334 03:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.335 "name": "raid_bdev1", 00:15:38.335 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:38.335 "strip_size_kb": 64, 00:15:38.335 "state": "online", 00:15:38.335 "raid_level": "raid5f", 00:15:38.335 "superblock": true, 00:15:38.335 "num_base_bdevs": 4, 00:15:38.335 "num_base_bdevs_discovered": 4, 00:15:38.335 "num_base_bdevs_operational": 4, 00:15:38.335 "process": { 00:15:38.335 "type": "rebuild", 00:15:38.335 "target": "spare", 00:15:38.335 "progress": { 00:15:38.335 "blocks": 174720, 00:15:38.335 "percent": 91 00:15:38.335 } 00:15:38.335 }, 00:15:38.335 "base_bdevs_list": [ 00:15:38.335 { 00:15:38.335 "name": "spare", 00:15:38.335 "uuid": "f52bf927-0f02-5bb2-bc07-bc2a631d2a4a", 00:15:38.335 "is_configured": true, 00:15:38.335 "data_offset": 2048, 00:15:38.335 "data_size": 63488 00:15:38.335 }, 00:15:38.335 { 00:15:38.335 "name": "BaseBdev2", 00:15:38.335 "uuid": 
"8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:38.335 "is_configured": true, 00:15:38.335 "data_offset": 2048, 00:15:38.335 "data_size": 63488 00:15:38.335 }, 00:15:38.335 { 00:15:38.335 "name": "BaseBdev3", 00:15:38.335 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:38.335 "is_configured": true, 00:15:38.335 "data_offset": 2048, 00:15:38.335 "data_size": 63488 00:15:38.335 }, 00:15:38.335 { 00:15:38.335 "name": "BaseBdev4", 00:15:38.335 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:38.335 "is_configured": true, 00:15:38.335 "data_offset": 2048, 00:15:38.335 "data_size": 63488 00:15:38.335 } 00:15:38.335 ] 00:15:38.335 }' 00:15:38.335 03:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.335 03:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.335 03:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.335 03:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.335 03:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:39.274 [2024-11-18 03:15:42.540277] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:39.274 [2024-11-18 03:15:42.540473] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:39.274 [2024-11-18 03:15:42.540649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.274 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.274 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.274 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.274 03:15:42 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.274 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.274 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.274 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.274 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.274 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.274 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.274 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.274 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.274 "name": "raid_bdev1", 00:15:39.274 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:39.274 "strip_size_kb": 64, 00:15:39.274 "state": "online", 00:15:39.274 "raid_level": "raid5f", 00:15:39.274 "superblock": true, 00:15:39.274 "num_base_bdevs": 4, 00:15:39.274 "num_base_bdevs_discovered": 4, 00:15:39.274 "num_base_bdevs_operational": 4, 00:15:39.274 "base_bdevs_list": [ 00:15:39.274 { 00:15:39.274 "name": "spare", 00:15:39.274 "uuid": "f52bf927-0f02-5bb2-bc07-bc2a631d2a4a", 00:15:39.274 "is_configured": true, 00:15:39.274 "data_offset": 2048, 00:15:39.274 "data_size": 63488 00:15:39.274 }, 00:15:39.274 { 00:15:39.274 "name": "BaseBdev2", 00:15:39.274 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:39.274 "is_configured": true, 00:15:39.274 "data_offset": 2048, 00:15:39.274 "data_size": 63488 00:15:39.274 }, 00:15:39.274 { 00:15:39.274 "name": "BaseBdev3", 00:15:39.274 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:39.274 "is_configured": true, 00:15:39.274 "data_offset": 2048, 00:15:39.274 "data_size": 63488 00:15:39.274 }, 
00:15:39.274 { 00:15:39.274 "name": "BaseBdev4", 00:15:39.274 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:39.274 "is_configured": true, 00:15:39.274 "data_offset": 2048, 00:15:39.274 "data_size": 63488 00:15:39.274 } 00:15:39.274 ] 00:15:39.274 }' 00:15:39.274 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.534 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:39.534 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.534 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:39.534 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:39.534 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:39.534 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.534 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:39.534 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:39.534 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.534 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.534 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.534 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.534 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.534 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.534 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.534 "name": "raid_bdev1", 00:15:39.534 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:39.534 "strip_size_kb": 64, 00:15:39.534 "state": "online", 00:15:39.534 "raid_level": "raid5f", 00:15:39.534 "superblock": true, 00:15:39.534 "num_base_bdevs": 4, 00:15:39.534 "num_base_bdevs_discovered": 4, 00:15:39.534 "num_base_bdevs_operational": 4, 00:15:39.534 "base_bdevs_list": [ 00:15:39.534 { 00:15:39.534 "name": "spare", 00:15:39.534 "uuid": "f52bf927-0f02-5bb2-bc07-bc2a631d2a4a", 00:15:39.534 "is_configured": true, 00:15:39.534 "data_offset": 2048, 00:15:39.534 "data_size": 63488 00:15:39.534 }, 00:15:39.534 { 00:15:39.534 "name": "BaseBdev2", 00:15:39.534 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:39.534 "is_configured": true, 00:15:39.534 "data_offset": 2048, 00:15:39.534 "data_size": 63488 00:15:39.534 }, 00:15:39.534 { 00:15:39.534 "name": "BaseBdev3", 00:15:39.534 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:39.534 "is_configured": true, 00:15:39.534 "data_offset": 2048, 00:15:39.534 "data_size": 63488 00:15:39.534 }, 00:15:39.534 { 00:15:39.534 "name": "BaseBdev4", 00:15:39.534 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:39.534 "is_configured": true, 00:15:39.534 "data_offset": 2048, 00:15:39.534 "data_size": 63488 00:15:39.534 } 00:15:39.534 ] 00:15:39.534 }' 00:15:39.534 03:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.534 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:39.534 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.534 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:39.534 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:39.534 03:15:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.534 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.534 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.534 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.534 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.534 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.534 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.534 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.534 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.534 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.534 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.534 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.534 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.534 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.534 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.534 "name": "raid_bdev1", 00:15:39.534 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:39.534 "strip_size_kb": 64, 00:15:39.534 "state": "online", 00:15:39.534 "raid_level": "raid5f", 00:15:39.534 "superblock": true, 00:15:39.534 "num_base_bdevs": 4, 00:15:39.534 "num_base_bdevs_discovered": 4, 00:15:39.535 "num_base_bdevs_operational": 4, 00:15:39.535 
"base_bdevs_list": [ 00:15:39.535 { 00:15:39.535 "name": "spare", 00:15:39.535 "uuid": "f52bf927-0f02-5bb2-bc07-bc2a631d2a4a", 00:15:39.535 "is_configured": true, 00:15:39.535 "data_offset": 2048, 00:15:39.535 "data_size": 63488 00:15:39.535 }, 00:15:39.535 { 00:15:39.535 "name": "BaseBdev2", 00:15:39.535 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:39.535 "is_configured": true, 00:15:39.535 "data_offset": 2048, 00:15:39.535 "data_size": 63488 00:15:39.535 }, 00:15:39.535 { 00:15:39.535 "name": "BaseBdev3", 00:15:39.535 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:39.535 "is_configured": true, 00:15:39.535 "data_offset": 2048, 00:15:39.535 "data_size": 63488 00:15:39.535 }, 00:15:39.535 { 00:15:39.535 "name": "BaseBdev4", 00:15:39.535 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:39.535 "is_configured": true, 00:15:39.535 "data_offset": 2048, 00:15:39.535 "data_size": 63488 00:15:39.535 } 00:15:39.535 ] 00:15:39.535 }' 00:15:39.535 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.535 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.105 [2024-11-18 03:15:43.444671] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:40.105 [2024-11-18 03:15:43.444755] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:40.105 [2024-11-18 03:15:43.444867] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.105 [2024-11-18 03:15:43.444985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:15:40.105 [2024-11-18 03:15:43.445003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:40.105 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:40.365 /dev/nbd0 00:15:40.365 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:40.365 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:40.365 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:40.366 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:40.366 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:40.366 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:40.366 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:40.366 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:40.366 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:40.366 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:40.366 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.366 1+0 records in 00:15:40.366 1+0 records out 00:15:40.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351188 s, 11.7 MB/s 00:15:40.366 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.366 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:40.366 03:15:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.366 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:40.366 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:40.366 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.366 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:40.366 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:40.626 /dev/nbd1 00:15:40.626 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:40.626 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:40.626 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:40.626 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:40.626 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:40.626 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:40.626 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:40.626 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:40.626 03:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:40.626 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:40.626 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:15:40.626 1+0 records in 00:15:40.626 1+0 records out 00:15:40.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379205 s, 10.8 MB/s 00:15:40.626 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.626 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:40.626 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.626 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:40.626 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:40.626 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.626 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:40.626 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:40.626 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:40.626 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.626 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:40.626 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:40.626 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:40.626 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.626 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:40.886 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:15:40.886 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:40.886 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:40.886 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.886 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.886 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:40.886 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:40.886 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.886 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.886 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.147 [2024-11-18 03:15:44.526172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:41.147 [2024-11-18 03:15:44.526234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.147 [2024-11-18 03:15:44.526256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:41.147 [2024-11-18 03:15:44.526267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.147 [2024-11-18 03:15:44.528461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.147 [2024-11-18 03:15:44.528548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:41.147 [2024-11-18 03:15:44.528648] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:41.147 [2024-11-18 03:15:44.528692] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:41.147 [2024-11-18 03:15:44.528806] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.147 [2024-11-18 03:15:44.528904] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:41.147 [2024-11-18 03:15:44.528982] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:41.147 spare 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.147 [2024-11-18 03:15:44.628894] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:41.147 [2024-11-18 03:15:44.628948] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:41.147 [2024-11-18 03:15:44.629296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049030 00:15:41.147 [2024-11-18 03:15:44.629792] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:41.147 [2024-11-18 03:15:44.629812] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:41.147 [2024-11-18 03:15:44.630032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.147 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.147 "name": "raid_bdev1", 00:15:41.147 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:41.147 "strip_size_kb": 64, 00:15:41.147 "state": "online", 00:15:41.147 "raid_level": "raid5f", 00:15:41.147 "superblock": true, 00:15:41.147 "num_base_bdevs": 4, 00:15:41.147 "num_base_bdevs_discovered": 4, 00:15:41.147 "num_base_bdevs_operational": 4, 00:15:41.147 "base_bdevs_list": [ 00:15:41.147 { 00:15:41.147 "name": "spare", 00:15:41.147 "uuid": "f52bf927-0f02-5bb2-bc07-bc2a631d2a4a", 00:15:41.147 "is_configured": true, 00:15:41.147 "data_offset": 2048, 00:15:41.147 "data_size": 63488 00:15:41.147 }, 00:15:41.147 { 00:15:41.147 "name": "BaseBdev2", 00:15:41.148 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:41.148 "is_configured": true, 00:15:41.148 "data_offset": 
2048, 00:15:41.148 "data_size": 63488 00:15:41.148 }, 00:15:41.148 { 00:15:41.148 "name": "BaseBdev3", 00:15:41.148 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:41.148 "is_configured": true, 00:15:41.148 "data_offset": 2048, 00:15:41.148 "data_size": 63488 00:15:41.148 }, 00:15:41.148 { 00:15:41.148 "name": "BaseBdev4", 00:15:41.148 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:41.148 "is_configured": true, 00:15:41.148 "data_offset": 2048, 00:15:41.148 "data_size": 63488 00:15:41.148 } 00:15:41.148 ] 00:15:41.148 }' 00:15:41.148 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.148 03:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.719 "name": 
"raid_bdev1", 00:15:41.719 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:41.719 "strip_size_kb": 64, 00:15:41.719 "state": "online", 00:15:41.719 "raid_level": "raid5f", 00:15:41.719 "superblock": true, 00:15:41.719 "num_base_bdevs": 4, 00:15:41.719 "num_base_bdevs_discovered": 4, 00:15:41.719 "num_base_bdevs_operational": 4, 00:15:41.719 "base_bdevs_list": [ 00:15:41.719 { 00:15:41.719 "name": "spare", 00:15:41.719 "uuid": "f52bf927-0f02-5bb2-bc07-bc2a631d2a4a", 00:15:41.719 "is_configured": true, 00:15:41.719 "data_offset": 2048, 00:15:41.719 "data_size": 63488 00:15:41.719 }, 00:15:41.719 { 00:15:41.719 "name": "BaseBdev2", 00:15:41.719 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:41.719 "is_configured": true, 00:15:41.719 "data_offset": 2048, 00:15:41.719 "data_size": 63488 00:15:41.719 }, 00:15:41.719 { 00:15:41.719 "name": "BaseBdev3", 00:15:41.719 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:41.719 "is_configured": true, 00:15:41.719 "data_offset": 2048, 00:15:41.719 "data_size": 63488 00:15:41.719 }, 00:15:41.719 { 00:15:41.719 "name": "BaseBdev4", 00:15:41.719 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:41.719 "is_configured": true, 00:15:41.719 "data_offset": 2048, 00:15:41.719 "data_size": 63488 00:15:41.719 } 00:15:41.719 ] 00:15:41.719 }' 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.719 
03:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.719 [2024-11-18 03:15:45.181112] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.719 "name": "raid_bdev1", 00:15:41.719 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:41.719 "strip_size_kb": 64, 00:15:41.719 "state": "online", 00:15:41.719 "raid_level": "raid5f", 00:15:41.719 "superblock": true, 00:15:41.719 "num_base_bdevs": 4, 00:15:41.719 "num_base_bdevs_discovered": 3, 00:15:41.719 "num_base_bdevs_operational": 3, 00:15:41.719 "base_bdevs_list": [ 00:15:41.719 { 00:15:41.719 "name": null, 00:15:41.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.719 "is_configured": false, 00:15:41.719 "data_offset": 0, 00:15:41.719 "data_size": 63488 00:15:41.719 }, 00:15:41.719 { 00:15:41.719 "name": "BaseBdev2", 00:15:41.719 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:41.719 "is_configured": true, 00:15:41.719 "data_offset": 2048, 00:15:41.719 "data_size": 63488 00:15:41.719 }, 00:15:41.719 { 00:15:41.719 "name": "BaseBdev3", 00:15:41.719 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:41.719 "is_configured": true, 00:15:41.719 "data_offset": 2048, 00:15:41.719 "data_size": 63488 00:15:41.719 }, 00:15:41.719 { 00:15:41.719 "name": "BaseBdev4", 00:15:41.719 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:41.719 "is_configured": true, 00:15:41.719 "data_offset": 
2048, 00:15:41.719 "data_size": 63488 00:15:41.719 } 00:15:41.719 ] 00:15:41.719 }' 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.719 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.290 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:42.290 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.290 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.290 [2024-11-18 03:15:45.604429] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.290 [2024-11-18 03:15:45.604708] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:42.290 [2024-11-18 03:15:45.604778] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:42.290 [2024-11-18 03:15:45.604864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.290 [2024-11-18 03:15:45.608199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049100 00:15:42.290 [2024-11-18 03:15:45.610587] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.290 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.290 03:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:43.230 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.230 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.230 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.230 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.230 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.230 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.230 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.230 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.230 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.230 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.230 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.230 "name": "raid_bdev1", 00:15:43.230 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:43.230 "strip_size_kb": 64, 00:15:43.230 "state": "online", 00:15:43.230 
"raid_level": "raid5f", 00:15:43.230 "superblock": true, 00:15:43.230 "num_base_bdevs": 4, 00:15:43.230 "num_base_bdevs_discovered": 4, 00:15:43.230 "num_base_bdevs_operational": 4, 00:15:43.230 "process": { 00:15:43.230 "type": "rebuild", 00:15:43.230 "target": "spare", 00:15:43.230 "progress": { 00:15:43.230 "blocks": 19200, 00:15:43.230 "percent": 10 00:15:43.230 } 00:15:43.230 }, 00:15:43.230 "base_bdevs_list": [ 00:15:43.230 { 00:15:43.230 "name": "spare", 00:15:43.230 "uuid": "f52bf927-0f02-5bb2-bc07-bc2a631d2a4a", 00:15:43.230 "is_configured": true, 00:15:43.230 "data_offset": 2048, 00:15:43.230 "data_size": 63488 00:15:43.230 }, 00:15:43.230 { 00:15:43.230 "name": "BaseBdev2", 00:15:43.230 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:43.230 "is_configured": true, 00:15:43.230 "data_offset": 2048, 00:15:43.230 "data_size": 63488 00:15:43.230 }, 00:15:43.230 { 00:15:43.230 "name": "BaseBdev3", 00:15:43.230 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:43.230 "is_configured": true, 00:15:43.230 "data_offset": 2048, 00:15:43.230 "data_size": 63488 00:15:43.230 }, 00:15:43.230 { 00:15:43.230 "name": "BaseBdev4", 00:15:43.230 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:43.230 "is_configured": true, 00:15:43.230 "data_offset": 2048, 00:15:43.230 "data_size": 63488 00:15:43.230 } 00:15:43.231 ] 00:15:43.231 }' 00:15:43.231 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.231 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.231 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.231 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.231 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:43.231 03:15:46 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.231 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.231 [2024-11-18 03:15:46.753842] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.491 [2024-11-18 03:15:46.818738] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:43.491 [2024-11-18 03:15:46.818902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.491 [2024-11-18 03:15:46.818951] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.491 [2024-11-18 03:15:46.818987] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.491 "name": "raid_bdev1", 00:15:43.491 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:43.491 "strip_size_kb": 64, 00:15:43.491 "state": "online", 00:15:43.491 "raid_level": "raid5f", 00:15:43.491 "superblock": true, 00:15:43.491 "num_base_bdevs": 4, 00:15:43.491 "num_base_bdevs_discovered": 3, 00:15:43.491 "num_base_bdevs_operational": 3, 00:15:43.491 "base_bdevs_list": [ 00:15:43.491 { 00:15:43.491 "name": null, 00:15:43.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.491 "is_configured": false, 00:15:43.491 "data_offset": 0, 00:15:43.491 "data_size": 63488 00:15:43.491 }, 00:15:43.491 { 00:15:43.491 "name": "BaseBdev2", 00:15:43.491 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:43.491 "is_configured": true, 00:15:43.491 "data_offset": 2048, 00:15:43.491 "data_size": 63488 00:15:43.491 }, 00:15:43.491 { 00:15:43.491 "name": "BaseBdev3", 00:15:43.491 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:43.491 "is_configured": true, 00:15:43.491 "data_offset": 2048, 00:15:43.491 "data_size": 63488 00:15:43.491 }, 00:15:43.491 { 00:15:43.491 "name": "BaseBdev4", 00:15:43.491 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:43.491 "is_configured": true, 00:15:43.491 "data_offset": 2048, 00:15:43.491 "data_size": 63488 00:15:43.491 } 00:15:43.491 ] 00:15:43.491 
}' 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.491 03:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.751 03:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:43.751 03:15:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.751 03:15:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.751 [2024-11-18 03:15:47.243662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:43.751 [2024-11-18 03:15:47.243735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.751 [2024-11-18 03:15:47.243767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:43.751 [2024-11-18 03:15:47.243777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.751 [2024-11-18 03:15:47.244286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.751 [2024-11-18 03:15:47.244306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:43.751 [2024-11-18 03:15:47.244400] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:43.751 [2024-11-18 03:15:47.244419] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:43.751 [2024-11-18 03:15:47.244446] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:43.751 [2024-11-18 03:15:47.244477] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.751 [2024-11-18 03:15:47.247848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:15:43.751 spare 00:15:43.751 03:15:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.751 03:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:43.751 [2024-11-18 03:15:47.250364] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:44.693 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.693 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.693 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.693 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.693 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.693 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.693 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.693 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.693 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.953 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.953 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.953 "name": "raid_bdev1", 00:15:44.953 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:44.953 "strip_size_kb": 64, 00:15:44.953 "state": 
"online", 00:15:44.953 "raid_level": "raid5f", 00:15:44.953 "superblock": true, 00:15:44.953 "num_base_bdevs": 4, 00:15:44.953 "num_base_bdevs_discovered": 4, 00:15:44.953 "num_base_bdevs_operational": 4, 00:15:44.953 "process": { 00:15:44.953 "type": "rebuild", 00:15:44.953 "target": "spare", 00:15:44.953 "progress": { 00:15:44.953 "blocks": 19200, 00:15:44.953 "percent": 10 00:15:44.953 } 00:15:44.953 }, 00:15:44.953 "base_bdevs_list": [ 00:15:44.953 { 00:15:44.953 "name": "spare", 00:15:44.953 "uuid": "f52bf927-0f02-5bb2-bc07-bc2a631d2a4a", 00:15:44.953 "is_configured": true, 00:15:44.953 "data_offset": 2048, 00:15:44.953 "data_size": 63488 00:15:44.953 }, 00:15:44.953 { 00:15:44.953 "name": "BaseBdev2", 00:15:44.953 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:44.953 "is_configured": true, 00:15:44.953 "data_offset": 2048, 00:15:44.953 "data_size": 63488 00:15:44.953 }, 00:15:44.953 { 00:15:44.953 "name": "BaseBdev3", 00:15:44.953 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:44.953 "is_configured": true, 00:15:44.953 "data_offset": 2048, 00:15:44.953 "data_size": 63488 00:15:44.953 }, 00:15:44.953 { 00:15:44.953 "name": "BaseBdev4", 00:15:44.953 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:44.953 "is_configured": true, 00:15:44.953 "data_offset": 2048, 00:15:44.953 "data_size": 63488 00:15:44.953 } 00:15:44.953 ] 00:15:44.953 }' 00:15:44.953 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.953 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.953 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.953 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.953 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:44.953 03:15:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.953 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.953 [2024-11-18 03:15:48.406655] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:44.953 [2024-11-18 03:15:48.458515] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:44.953 [2024-11-18 03:15:48.458680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.953 [2024-11-18 03:15:48.458731] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:44.953 [2024-11-18 03:15:48.458772] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:44.953 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.953 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:44.953 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.953 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.953 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.953 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.954 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.954 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.954 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.954 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.954 03:15:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.954 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.954 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.954 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.954 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.954 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.954 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.954 "name": "raid_bdev1", 00:15:44.954 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:44.954 "strip_size_kb": 64, 00:15:44.954 "state": "online", 00:15:44.954 "raid_level": "raid5f", 00:15:44.954 "superblock": true, 00:15:44.954 "num_base_bdevs": 4, 00:15:44.954 "num_base_bdevs_discovered": 3, 00:15:44.954 "num_base_bdevs_operational": 3, 00:15:44.954 "base_bdevs_list": [ 00:15:44.954 { 00:15:44.954 "name": null, 00:15:44.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.954 "is_configured": false, 00:15:44.954 "data_offset": 0, 00:15:44.954 "data_size": 63488 00:15:44.954 }, 00:15:44.954 { 00:15:44.954 "name": "BaseBdev2", 00:15:44.954 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:44.954 "is_configured": true, 00:15:44.954 "data_offset": 2048, 00:15:44.954 "data_size": 63488 00:15:44.954 }, 00:15:44.954 { 00:15:44.954 "name": "BaseBdev3", 00:15:44.954 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:44.954 "is_configured": true, 00:15:44.954 "data_offset": 2048, 00:15:44.954 "data_size": 63488 00:15:44.954 }, 00:15:44.954 { 00:15:44.954 "name": "BaseBdev4", 00:15:44.954 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:44.954 "is_configured": true, 00:15:44.954 "data_offset": 2048, 00:15:44.954 
"data_size": 63488 00:15:44.954 } 00:15:44.954 ] 00:15:44.954 }' 00:15:44.954 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.954 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.524 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:45.524 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.524 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:45.524 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:45.524 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.524 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.524 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.524 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.524 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.524 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.524 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.524 "name": "raid_bdev1", 00:15:45.524 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:45.524 "strip_size_kb": 64, 00:15:45.524 "state": "online", 00:15:45.524 "raid_level": "raid5f", 00:15:45.524 "superblock": true, 00:15:45.524 "num_base_bdevs": 4, 00:15:45.524 "num_base_bdevs_discovered": 3, 00:15:45.524 "num_base_bdevs_operational": 3, 00:15:45.524 "base_bdevs_list": [ 00:15:45.524 { 00:15:45.524 "name": null, 00:15:45.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.524 
"is_configured": false, 00:15:45.524 "data_offset": 0, 00:15:45.524 "data_size": 63488 00:15:45.524 }, 00:15:45.524 { 00:15:45.524 "name": "BaseBdev2", 00:15:45.524 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:45.524 "is_configured": true, 00:15:45.524 "data_offset": 2048, 00:15:45.524 "data_size": 63488 00:15:45.524 }, 00:15:45.524 { 00:15:45.524 "name": "BaseBdev3", 00:15:45.524 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:45.524 "is_configured": true, 00:15:45.524 "data_offset": 2048, 00:15:45.524 "data_size": 63488 00:15:45.524 }, 00:15:45.524 { 00:15:45.524 "name": "BaseBdev4", 00:15:45.524 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:45.524 "is_configured": true, 00:15:45.524 "data_offset": 2048, 00:15:45.524 "data_size": 63488 00:15:45.524 } 00:15:45.524 ] 00:15:45.524 }' 00:15:45.524 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.524 03:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:45.524 03:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.524 03:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:45.524 03:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:45.524 03:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.524 03:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.524 03:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.524 03:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:45.524 03:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.524 03:15:49 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.524 [2024-11-18 03:15:49.043200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:45.524 [2024-11-18 03:15:49.043267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.524 [2024-11-18 03:15:49.043288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:45.524 [2024-11-18 03:15:49.043299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.524 [2024-11-18 03:15:49.043766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.524 [2024-11-18 03:15:49.043788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:45.524 [2024-11-18 03:15:49.043866] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:45.524 [2024-11-18 03:15:49.043886] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:45.524 [2024-11-18 03:15:49.043894] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:45.525 [2024-11-18 03:15:49.043908] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:45.525 BaseBdev1 00:15:45.525 03:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.525 03:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:46.906 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:46.906 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.906 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:46.906 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.906 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.906 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.906 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.906 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.906 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.906 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.906 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.906 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.906 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.906 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.906 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.906 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.906 "name": "raid_bdev1", 00:15:46.906 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:46.906 "strip_size_kb": 64, 00:15:46.906 "state": "online", 00:15:46.906 "raid_level": "raid5f", 00:15:46.906 "superblock": true, 00:15:46.906 "num_base_bdevs": 4, 00:15:46.906 "num_base_bdevs_discovered": 3, 00:15:46.906 "num_base_bdevs_operational": 3, 00:15:46.906 "base_bdevs_list": [ 00:15:46.906 { 00:15:46.906 "name": null, 00:15:46.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.906 "is_configured": false, 00:15:46.906 
"data_offset": 0, 00:15:46.906 "data_size": 63488 00:15:46.906 }, 00:15:46.906 { 00:15:46.906 "name": "BaseBdev2", 00:15:46.906 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:46.906 "is_configured": true, 00:15:46.906 "data_offset": 2048, 00:15:46.906 "data_size": 63488 00:15:46.906 }, 00:15:46.906 { 00:15:46.906 "name": "BaseBdev3", 00:15:46.906 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:46.906 "is_configured": true, 00:15:46.906 "data_offset": 2048, 00:15:46.906 "data_size": 63488 00:15:46.906 }, 00:15:46.906 { 00:15:46.906 "name": "BaseBdev4", 00:15:46.906 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:46.906 "is_configured": true, 00:15:46.906 "data_offset": 2048, 00:15:46.906 "data_size": 63488 00:15:46.906 } 00:15:46.906 ] 00:15:46.906 }' 00:15:46.906 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.906 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.166 "name": "raid_bdev1", 00:15:47.166 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:47.166 "strip_size_kb": 64, 00:15:47.166 "state": "online", 00:15:47.166 "raid_level": "raid5f", 00:15:47.166 "superblock": true, 00:15:47.166 "num_base_bdevs": 4, 00:15:47.166 "num_base_bdevs_discovered": 3, 00:15:47.166 "num_base_bdevs_operational": 3, 00:15:47.166 "base_bdevs_list": [ 00:15:47.166 { 00:15:47.166 "name": null, 00:15:47.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.166 "is_configured": false, 00:15:47.166 "data_offset": 0, 00:15:47.166 "data_size": 63488 00:15:47.166 }, 00:15:47.166 { 00:15:47.166 "name": "BaseBdev2", 00:15:47.166 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:47.166 "is_configured": true, 00:15:47.166 "data_offset": 2048, 00:15:47.166 "data_size": 63488 00:15:47.166 }, 00:15:47.166 { 00:15:47.166 "name": "BaseBdev3", 00:15:47.166 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:47.166 "is_configured": true, 00:15:47.166 "data_offset": 2048, 00:15:47.166 "data_size": 63488 00:15:47.166 }, 00:15:47.166 { 00:15:47.166 "name": "BaseBdev4", 00:15:47.166 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:47.166 "is_configured": true, 00:15:47.166 "data_offset": 2048, 00:15:47.166 "data_size": 63488 00:15:47.166 } 00:15:47.166 ] 00:15:47.166 }' 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:47.166 
03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.166 [2024-11-18 03:15:50.656693] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:47.166 [2024-11-18 03:15:50.656907] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:47.166 [2024-11-18 03:15:50.656929] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:47.166 request: 00:15:47.166 { 00:15:47.166 "base_bdev": "BaseBdev1", 00:15:47.166 "raid_bdev": "raid_bdev1", 00:15:47.166 "method": "bdev_raid_add_base_bdev", 00:15:47.166 "req_id": 1 00:15:47.166 } 00:15:47.166 Got JSON-RPC error response 00:15:47.166 response: 00:15:47.166 { 00:15:47.166 "code": -22, 00:15:47.166 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:15:47.166 } 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:47.166 03:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:48.104 03:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:48.104 03:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.104 03:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.104 03:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.104 03:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.104 03:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.104 03:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.104 03:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.104 03:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.104 03:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.104 03:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.104 03:15:51 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.104 03:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.104 03:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.363 03:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.363 03:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.363 "name": "raid_bdev1", 00:15:48.363 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:48.363 "strip_size_kb": 64, 00:15:48.363 "state": "online", 00:15:48.363 "raid_level": "raid5f", 00:15:48.363 "superblock": true, 00:15:48.363 "num_base_bdevs": 4, 00:15:48.363 "num_base_bdevs_discovered": 3, 00:15:48.363 "num_base_bdevs_operational": 3, 00:15:48.363 "base_bdevs_list": [ 00:15:48.363 { 00:15:48.363 "name": null, 00:15:48.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.363 "is_configured": false, 00:15:48.363 "data_offset": 0, 00:15:48.363 "data_size": 63488 00:15:48.363 }, 00:15:48.363 { 00:15:48.363 "name": "BaseBdev2", 00:15:48.363 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:48.363 "is_configured": true, 00:15:48.363 "data_offset": 2048, 00:15:48.364 "data_size": 63488 00:15:48.364 }, 00:15:48.364 { 00:15:48.364 "name": "BaseBdev3", 00:15:48.364 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:48.364 "is_configured": true, 00:15:48.364 "data_offset": 2048, 00:15:48.364 "data_size": 63488 00:15:48.364 }, 00:15:48.364 { 00:15:48.364 "name": "BaseBdev4", 00:15:48.364 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:48.364 "is_configured": true, 00:15:48.364 "data_offset": 2048, 00:15:48.364 "data_size": 63488 00:15:48.364 } 00:15:48.364 ] 00:15:48.364 }' 00:15:48.364 03:15:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.364 03:15:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:48.634 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.634 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.634 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.634 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:48.634 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.634 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.634 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.634 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.634 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.634 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.634 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.634 "name": "raid_bdev1", 00:15:48.634 "uuid": "5f142baf-0aba-4e50-8f95-2f74118e9252", 00:15:48.634 "strip_size_kb": 64, 00:15:48.634 "state": "online", 00:15:48.634 "raid_level": "raid5f", 00:15:48.634 "superblock": true, 00:15:48.634 "num_base_bdevs": 4, 00:15:48.634 "num_base_bdevs_discovered": 3, 00:15:48.634 "num_base_bdevs_operational": 3, 00:15:48.634 "base_bdevs_list": [ 00:15:48.634 { 00:15:48.634 "name": null, 00:15:48.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.634 "is_configured": false, 00:15:48.634 "data_offset": 0, 00:15:48.634 "data_size": 63488 00:15:48.634 }, 00:15:48.634 { 00:15:48.634 "name": "BaseBdev2", 00:15:48.634 "uuid": "8ab30100-4d2e-5b59-8c90-65f92e020d3f", 00:15:48.634 "is_configured": true, 
00:15:48.634 "data_offset": 2048, 00:15:48.634 "data_size": 63488 00:15:48.634 }, 00:15:48.634 { 00:15:48.634 "name": "BaseBdev3", 00:15:48.634 "uuid": "6873d50b-d891-5eab-a99a-5b332f463d03", 00:15:48.634 "is_configured": true, 00:15:48.634 "data_offset": 2048, 00:15:48.634 "data_size": 63488 00:15:48.634 }, 00:15:48.634 { 00:15:48.634 "name": "BaseBdev4", 00:15:48.634 "uuid": "0186723a-899c-5d44-a6b2-477f86e44dec", 00:15:48.634 "is_configured": true, 00:15:48.634 "data_offset": 2048, 00:15:48.634 "data_size": 63488 00:15:48.634 } 00:15:48.634 ] 00:15:48.634 }' 00:15:48.634 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.634 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.634 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.907 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:48.907 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95631 00:15:48.907 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 95631 ']' 00:15:48.908 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 95631 00:15:48.908 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:48.908 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:48.908 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95631 00:15:48.908 killing process with pid 95631 00:15:48.908 Received shutdown signal, test time was about 60.000000 seconds 00:15:48.908 00:15:48.908 Latency(us) 00:15:48.908 [2024-11-18T03:15:52.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.908 [2024-11-18T03:15:52.485Z] 
=================================================================================================================== 00:15:48.908 [2024-11-18T03:15:52.485Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:48.908 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:48.908 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:48.908 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95631' 00:15:48.908 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 95631 00:15:48.908 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 95631 00:15:48.908 [2024-11-18 03:15:52.261774] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:48.908 [2024-11-18 03:15:52.261897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.908 [2024-11-18 03:15:52.262009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.908 [2024-11-18 03:15:52.262021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:48.908 [2024-11-18 03:15:52.314496] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:49.168 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:49.168 00:15:49.168 real 0m24.811s 00:15:49.168 user 0m31.363s 00:15:49.168 sys 0m2.873s 00:15:49.168 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:49.168 ************************************ 00:15:49.168 END TEST raid5f_rebuild_test_sb 00:15:49.168 ************************************ 00:15:49.168 03:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.168 03:15:52 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:15:49.168 03:15:52 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:15:49.168 03:15:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:49.168 03:15:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:49.168 03:15:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:49.168 ************************************ 00:15:49.168 START TEST raid_state_function_test_sb_4k 00:15:49.168 ************************************ 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:49.168 03:15:52 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96429 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96429' 00:15:49.168 Process raid pid: 96429 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96429 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96429 ']' 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:49.168 03:15:52 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:49.168 03:15:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.168 [2024-11-18 03:15:52.704010] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:49.168 [2024-11-18 03:15:52.704155] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.428 [2024-11-18 03:15:52.865575] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.428 [2024-11-18 03:15:52.916284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.428 [2024-11-18 03:15:52.959175] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.428 [2024-11-18 03:15:52.959219] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.996 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:49.996 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:49.996 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:15:49.996 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.996 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.996 [2024-11-18 03:15:53.568793] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.996 [2024-11-18 03:15:53.568849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.996 [2024-11-18 03:15:53.568871] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.996 [2024-11-18 03:15:53.568883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.256 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.256 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:50.256 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.256 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.256 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.256 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.256 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.256 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.256 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.256 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.256 
03:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.256 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.256 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.256 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.256 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.256 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.256 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.256 "name": "Existed_Raid", 00:15:50.256 "uuid": "8da1fe1c-b195-42b4-818f-aebc675b4f8b", 00:15:50.256 "strip_size_kb": 0, 00:15:50.256 "state": "configuring", 00:15:50.256 "raid_level": "raid1", 00:15:50.256 "superblock": true, 00:15:50.256 "num_base_bdevs": 2, 00:15:50.256 "num_base_bdevs_discovered": 0, 00:15:50.256 "num_base_bdevs_operational": 2, 00:15:50.256 "base_bdevs_list": [ 00:15:50.256 { 00:15:50.256 "name": "BaseBdev1", 00:15:50.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.256 "is_configured": false, 00:15:50.256 "data_offset": 0, 00:15:50.256 "data_size": 0 00:15:50.256 }, 00:15:50.256 { 00:15:50.256 "name": "BaseBdev2", 00:15:50.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.256 "is_configured": false, 00:15:50.256 "data_offset": 0, 00:15:50.256 "data_size": 0 00:15:50.256 } 00:15:50.256 ] 00:15:50.256 }' 00:15:50.256 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.256 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.516 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:15:50.516 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.516 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.516 [2024-11-18 03:15:53.976007] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.516 [2024-11-18 03:15:53.976104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:50.516 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.516 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:50.516 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.516 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.516 [2024-11-18 03:15:53.984019] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:50.516 [2024-11-18 03:15:53.984062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:50.516 [2024-11-18 03:15:53.984070] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.516 [2024-11-18 03:15:53.984080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.516 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.516 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:15:50.516 03:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.516 03:15:53 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.516 [2024-11-18 03:15:54.001034] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.516 BaseBdev1 00:15:50.516 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.516 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:50.516 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:50.516 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:50.516 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:50.516 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:50.516 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:50.516 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:50.516 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.516 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.516 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.516 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:50.516 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.516 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.516 [ 00:15:50.516 { 00:15:50.516 "name": "BaseBdev1", 00:15:50.516 "aliases": [ 00:15:50.516 
"b4cce992-cbf0-45d0-ab86-9c5fc8a2f0f5" 00:15:50.516 ], 00:15:50.516 "product_name": "Malloc disk", 00:15:50.516 "block_size": 4096, 00:15:50.516 "num_blocks": 8192, 00:15:50.516 "uuid": "b4cce992-cbf0-45d0-ab86-9c5fc8a2f0f5", 00:15:50.516 "assigned_rate_limits": { 00:15:50.516 "rw_ios_per_sec": 0, 00:15:50.516 "rw_mbytes_per_sec": 0, 00:15:50.516 "r_mbytes_per_sec": 0, 00:15:50.516 "w_mbytes_per_sec": 0 00:15:50.516 }, 00:15:50.516 "claimed": true, 00:15:50.516 "claim_type": "exclusive_write", 00:15:50.516 "zoned": false, 00:15:50.517 "supported_io_types": { 00:15:50.517 "read": true, 00:15:50.517 "write": true, 00:15:50.517 "unmap": true, 00:15:50.517 "flush": true, 00:15:50.517 "reset": true, 00:15:50.517 "nvme_admin": false, 00:15:50.517 "nvme_io": false, 00:15:50.517 "nvme_io_md": false, 00:15:50.517 "write_zeroes": true, 00:15:50.517 "zcopy": true, 00:15:50.517 "get_zone_info": false, 00:15:50.517 "zone_management": false, 00:15:50.517 "zone_append": false, 00:15:50.517 "compare": false, 00:15:50.517 "compare_and_write": false, 00:15:50.517 "abort": true, 00:15:50.517 "seek_hole": false, 00:15:50.517 "seek_data": false, 00:15:50.517 "copy": true, 00:15:50.517 "nvme_iov_md": false 00:15:50.517 }, 00:15:50.517 "memory_domains": [ 00:15:50.517 { 00:15:50.517 "dma_device_id": "system", 00:15:50.517 "dma_device_type": 1 00:15:50.517 }, 00:15:50.517 { 00:15:50.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.517 "dma_device_type": 2 00:15:50.517 } 00:15:50.517 ], 00:15:50.517 "driver_specific": {} 00:15:50.517 } 00:15:50.517 ] 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.517 "name": "Existed_Raid", 00:15:50.517 "uuid": "7d662606-16b7-4fb2-95cf-a276ae19ae74", 00:15:50.517 "strip_size_kb": 0, 00:15:50.517 "state": "configuring", 00:15:50.517 "raid_level": "raid1", 00:15:50.517 "superblock": true, 00:15:50.517 "num_base_bdevs": 2, 00:15:50.517 
"num_base_bdevs_discovered": 1, 00:15:50.517 "num_base_bdevs_operational": 2, 00:15:50.517 "base_bdevs_list": [ 00:15:50.517 { 00:15:50.517 "name": "BaseBdev1", 00:15:50.517 "uuid": "b4cce992-cbf0-45d0-ab86-9c5fc8a2f0f5", 00:15:50.517 "is_configured": true, 00:15:50.517 "data_offset": 256, 00:15:50.517 "data_size": 7936 00:15:50.517 }, 00:15:50.517 { 00:15:50.517 "name": "BaseBdev2", 00:15:50.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.517 "is_configured": false, 00:15:50.517 "data_offset": 0, 00:15:50.517 "data_size": 0 00:15:50.517 } 00:15:50.517 ] 00:15:50.517 }' 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.517 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.086 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.087 [2024-11-18 03:15:54.448309] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:51.087 [2024-11-18 03:15:54.448411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.087 [2024-11-18 03:15:54.460318] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.087 [2024-11-18 03:15:54.462230] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:51.087 [2024-11-18 03:15:54.462305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.087 "name": "Existed_Raid", 00:15:51.087 "uuid": "dfd586b1-8195-4984-ac31-caf33bba5166", 00:15:51.087 "strip_size_kb": 0, 00:15:51.087 "state": "configuring", 00:15:51.087 "raid_level": "raid1", 00:15:51.087 "superblock": true, 00:15:51.087 "num_base_bdevs": 2, 00:15:51.087 "num_base_bdevs_discovered": 1, 00:15:51.087 "num_base_bdevs_operational": 2, 00:15:51.087 "base_bdevs_list": [ 00:15:51.087 { 00:15:51.087 "name": "BaseBdev1", 00:15:51.087 "uuid": "b4cce992-cbf0-45d0-ab86-9c5fc8a2f0f5", 00:15:51.087 "is_configured": true, 00:15:51.087 "data_offset": 256, 00:15:51.087 "data_size": 7936 00:15:51.087 }, 00:15:51.087 { 00:15:51.087 "name": "BaseBdev2", 00:15:51.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.087 "is_configured": false, 00:15:51.087 "data_offset": 0, 00:15:51.087 "data_size": 0 00:15:51.087 } 00:15:51.087 ] 00:15:51.087 }' 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.087 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.347 03:15:54 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.347 [2024-11-18 03:15:54.879481] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.347 [2024-11-18 03:15:54.879793] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:51.347 [2024-11-18 03:15:54.879850] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:51.347 [2024-11-18 03:15:54.880184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:51.347 BaseBdev2 00:15:51.347 [2024-11-18 03:15:54.880378] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:51.347 [2024-11-18 03:15:54.880414] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:51.347 [2024-11-18 03:15:54.880556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:51.347 03:15:54 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.347 [ 00:15:51.347 { 00:15:51.347 "name": "BaseBdev2", 00:15:51.347 "aliases": [ 00:15:51.347 "f63cf740-4945-4f33-af6f-f77ed29a59df" 00:15:51.347 ], 00:15:51.347 "product_name": "Malloc disk", 00:15:51.347 "block_size": 4096, 00:15:51.347 "num_blocks": 8192, 00:15:51.347 "uuid": "f63cf740-4945-4f33-af6f-f77ed29a59df", 00:15:51.347 "assigned_rate_limits": { 00:15:51.347 "rw_ios_per_sec": 0, 00:15:51.347 "rw_mbytes_per_sec": 0, 00:15:51.347 "r_mbytes_per_sec": 0, 00:15:51.347 "w_mbytes_per_sec": 0 00:15:51.347 }, 00:15:51.347 "claimed": true, 00:15:51.347 "claim_type": "exclusive_write", 00:15:51.347 "zoned": false, 00:15:51.347 "supported_io_types": { 00:15:51.347 "read": true, 00:15:51.347 "write": true, 00:15:51.347 "unmap": true, 00:15:51.347 "flush": true, 00:15:51.347 "reset": true, 00:15:51.347 "nvme_admin": false, 00:15:51.347 "nvme_io": false, 00:15:51.347 "nvme_io_md": false, 00:15:51.347 "write_zeroes": true, 00:15:51.347 "zcopy": true, 00:15:51.347 "get_zone_info": false, 00:15:51.347 "zone_management": false, 00:15:51.347 "zone_append": false, 00:15:51.347 "compare": false, 00:15:51.347 "compare_and_write": false, 00:15:51.347 "abort": true, 00:15:51.347 "seek_hole": false, 00:15:51.347 "seek_data": false, 00:15:51.347 "copy": true, 00:15:51.347 "nvme_iov_md": false 
00:15:51.347 }, 00:15:51.347 "memory_domains": [ 00:15:51.347 { 00:15:51.347 "dma_device_id": "system", 00:15:51.347 "dma_device_type": 1 00:15:51.347 }, 00:15:51.347 { 00:15:51.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.347 "dma_device_type": 2 00:15:51.347 } 00:15:51.347 ], 00:15:51.347 "driver_specific": {} 00:15:51.347 } 00:15:51.347 ] 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:51.347 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.348 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.348 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.348 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.607 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.607 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.607 "name": "Existed_Raid", 00:15:51.607 "uuid": "dfd586b1-8195-4984-ac31-caf33bba5166", 00:15:51.607 "strip_size_kb": 0, 00:15:51.607 "state": "online", 00:15:51.607 "raid_level": "raid1", 00:15:51.607 "superblock": true, 00:15:51.607 "num_base_bdevs": 2, 00:15:51.607 "num_base_bdevs_discovered": 2, 00:15:51.607 "num_base_bdevs_operational": 2, 00:15:51.607 "base_bdevs_list": [ 00:15:51.607 { 00:15:51.607 "name": "BaseBdev1", 00:15:51.607 "uuid": "b4cce992-cbf0-45d0-ab86-9c5fc8a2f0f5", 00:15:51.607 "is_configured": true, 00:15:51.607 "data_offset": 256, 00:15:51.607 "data_size": 7936 00:15:51.607 }, 00:15:51.607 { 00:15:51.607 "name": "BaseBdev2", 00:15:51.607 "uuid": "f63cf740-4945-4f33-af6f-f77ed29a59df", 00:15:51.607 "is_configured": true, 00:15:51.607 "data_offset": 256, 00:15:51.607 "data_size": 7936 00:15:51.607 } 00:15:51.607 ] 00:15:51.607 }' 00:15:51.607 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.607 03:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.867 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:51.867 03:15:55 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:51.867 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:51.867 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:51.867 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:51.867 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:51.867 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:51.867 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:51.867 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.867 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.867 [2024-11-18 03:15:55.335111] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.867 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.867 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:51.867 "name": "Existed_Raid", 00:15:51.867 "aliases": [ 00:15:51.867 "dfd586b1-8195-4984-ac31-caf33bba5166" 00:15:51.867 ], 00:15:51.867 "product_name": "Raid Volume", 00:15:51.867 "block_size": 4096, 00:15:51.867 "num_blocks": 7936, 00:15:51.867 "uuid": "dfd586b1-8195-4984-ac31-caf33bba5166", 00:15:51.867 "assigned_rate_limits": { 00:15:51.867 "rw_ios_per_sec": 0, 00:15:51.867 "rw_mbytes_per_sec": 0, 00:15:51.867 "r_mbytes_per_sec": 0, 00:15:51.867 "w_mbytes_per_sec": 0 00:15:51.867 }, 00:15:51.867 "claimed": false, 00:15:51.867 "zoned": false, 00:15:51.867 "supported_io_types": { 00:15:51.867 "read": true, 
00:15:51.867 "write": true, 00:15:51.867 "unmap": false, 00:15:51.867 "flush": false, 00:15:51.867 "reset": true, 00:15:51.867 "nvme_admin": false, 00:15:51.867 "nvme_io": false, 00:15:51.867 "nvme_io_md": false, 00:15:51.867 "write_zeroes": true, 00:15:51.867 "zcopy": false, 00:15:51.867 "get_zone_info": false, 00:15:51.867 "zone_management": false, 00:15:51.867 "zone_append": false, 00:15:51.867 "compare": false, 00:15:51.867 "compare_and_write": false, 00:15:51.867 "abort": false, 00:15:51.867 "seek_hole": false, 00:15:51.867 "seek_data": false, 00:15:51.867 "copy": false, 00:15:51.867 "nvme_iov_md": false 00:15:51.867 }, 00:15:51.867 "memory_domains": [ 00:15:51.867 { 00:15:51.867 "dma_device_id": "system", 00:15:51.867 "dma_device_type": 1 00:15:51.867 }, 00:15:51.867 { 00:15:51.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.867 "dma_device_type": 2 00:15:51.867 }, 00:15:51.867 { 00:15:51.867 "dma_device_id": "system", 00:15:51.867 "dma_device_type": 1 00:15:51.867 }, 00:15:51.867 { 00:15:51.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.867 "dma_device_type": 2 00:15:51.867 } 00:15:51.867 ], 00:15:51.867 "driver_specific": { 00:15:51.867 "raid": { 00:15:51.867 "uuid": "dfd586b1-8195-4984-ac31-caf33bba5166", 00:15:51.867 "strip_size_kb": 0, 00:15:51.867 "state": "online", 00:15:51.867 "raid_level": "raid1", 00:15:51.867 "superblock": true, 00:15:51.867 "num_base_bdevs": 2, 00:15:51.867 "num_base_bdevs_discovered": 2, 00:15:51.867 "num_base_bdevs_operational": 2, 00:15:51.867 "base_bdevs_list": [ 00:15:51.867 { 00:15:51.867 "name": "BaseBdev1", 00:15:51.867 "uuid": "b4cce992-cbf0-45d0-ab86-9c5fc8a2f0f5", 00:15:51.867 "is_configured": true, 00:15:51.867 "data_offset": 256, 00:15:51.867 "data_size": 7936 00:15:51.867 }, 00:15:51.867 { 00:15:51.867 "name": "BaseBdev2", 00:15:51.867 "uuid": "f63cf740-4945-4f33-af6f-f77ed29a59df", 00:15:51.867 "is_configured": true, 00:15:51.867 "data_offset": 256, 00:15:51.867 "data_size": 7936 00:15:51.867 } 
00:15:51.867 ] 00:15:51.867 } 00:15:51.867 } 00:15:51.867 }' 00:15:51.867 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:51.867 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:51.867 BaseBdev2' 00:15:51.867 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.867 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:51.867 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.867 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.867 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:51.868 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.868 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.127 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.127 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:52.127 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:52.127 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.127 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:52.127 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.128 [2024-11-18 03:15:55.514530] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:52.128 03:15:55 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.128 "name": "Existed_Raid", 00:15:52.128 "uuid": "dfd586b1-8195-4984-ac31-caf33bba5166", 00:15:52.128 "strip_size_kb": 0, 00:15:52.128 "state": "online", 00:15:52.128 "raid_level": "raid1", 00:15:52.128 "superblock": true, 00:15:52.128 
"num_base_bdevs": 2, 00:15:52.128 "num_base_bdevs_discovered": 1, 00:15:52.128 "num_base_bdevs_operational": 1, 00:15:52.128 "base_bdevs_list": [ 00:15:52.128 { 00:15:52.128 "name": null, 00:15:52.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.128 "is_configured": false, 00:15:52.128 "data_offset": 0, 00:15:52.128 "data_size": 7936 00:15:52.128 }, 00:15:52.128 { 00:15:52.128 "name": "BaseBdev2", 00:15:52.128 "uuid": "f63cf740-4945-4f33-af6f-f77ed29a59df", 00:15:52.128 "is_configured": true, 00:15:52.128 "data_offset": 256, 00:15:52.128 "data_size": 7936 00:15:52.128 } 00:15:52.128 ] 00:15:52.128 }' 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.128 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.388 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:52.388 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:52.388 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.388 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.388 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.388 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:52.388 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.648 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:52.648 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:52.648 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:15:52.648 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.648 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.648 [2024-11-18 03:15:55.977239] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:52.648 [2024-11-18 03:15:55.977349] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.648 [2024-11-18 03:15:55.989149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.648 [2024-11-18 03:15:55.989270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.649 [2024-11-18 03:15:55.989294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:52.649 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.649 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:52.649 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:52.649 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.649 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.649 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.649 03:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:52.649 03:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.649 03:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:52.649 03:15:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:52.649 03:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:52.649 03:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96429 00:15:52.649 03:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96429 ']' 00:15:52.649 03:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96429 00:15:52.649 03:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:52.649 03:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.649 03:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96429 00:15:52.649 03:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:52.649 03:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:52.649 03:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96429' 00:15:52.649 killing process with pid 96429 00:15:52.649 03:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96429 00:15:52.649 [2024-11-18 03:15:56.080487] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.649 03:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96429 00:15:52.649 [2024-11-18 03:15:56.081562] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:52.909 03:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:15:52.909 ************************************ 00:15:52.909 END TEST raid_state_function_test_sb_4k 00:15:52.909 
************************************ 00:15:52.909 00:15:52.909 real 0m3.704s 00:15:52.909 user 0m5.793s 00:15:52.909 sys 0m0.750s 00:15:52.909 03:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:52.909 03:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.909 03:15:56 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:15:52.909 03:15:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:52.909 03:15:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:52.909 03:15:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:52.909 ************************************ 00:15:52.909 START TEST raid_superblock_test_4k 00:15:52.909 ************************************ 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:52.909 
03:15:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96659 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 96659 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 96659 ']' 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:52.909 03:15:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.909 [2024-11-18 03:15:56.463653] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:52.910 [2024-11-18 03:15:56.463851] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96659 ] 00:15:53.169 [2024-11-18 03:15:56.626170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.169 [2024-11-18 03:15:56.678735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.169 [2024-11-18 03:15:56.721932] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.169 [2024-11-18 03:15:56.722045] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.739 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:53.739 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:15:53.739 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:53.739 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:53.739 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:53.739 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:53.739 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:53.739 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:53.739 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:53.739 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:53.739 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:15:53.739 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.739 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.739 malloc1 00:15:53.739 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.739 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:53.739 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.739 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.999 [2024-11-18 03:15:57.320745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:53.999 [2024-11-18 03:15:57.320882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.999 [2024-11-18 03:15:57.320923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:53.999 [2024-11-18 03:15:57.320973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.999 [2024-11-18 03:15:57.323198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.999 [2024-11-18 03:15:57.323273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:53.999 pt1 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.999 malloc2 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.999 [2024-11-18 03:15:57.362967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:53.999 [2024-11-18 03:15:57.363071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.999 [2024-11-18 03:15:57.363092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:53.999 [2024-11-18 03:15:57.363103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.999 [2024-11-18 03:15:57.365232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.999 [2024-11-18 
03:15:57.365271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:53.999 pt2 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.999 [2024-11-18 03:15:57.374999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:53.999 [2024-11-18 03:15:57.376909] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:53.999 [2024-11-18 03:15:57.377138] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:53.999 [2024-11-18 03:15:57.377195] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:53.999 [2024-11-18 03:15:57.377486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:53.999 [2024-11-18 03:15:57.377650] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:53.999 [2024-11-18 03:15:57.377690] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:53.999 [2024-11-18 03:15:57.377871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.999 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.000 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.000 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.000 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.000 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.000 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.000 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.000 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.000 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.000 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.000 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.000 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.000 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.000 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.000 "name": "raid_bdev1", 00:15:54.000 "uuid": "bbc0a16a-f470-4488-8281-07f562338cf0", 00:15:54.000 "strip_size_kb": 0, 00:15:54.000 "state": "online", 00:15:54.000 "raid_level": "raid1", 00:15:54.000 "superblock": true, 00:15:54.000 "num_base_bdevs": 2, 00:15:54.000 
"num_base_bdevs_discovered": 2, 00:15:54.000 "num_base_bdevs_operational": 2, 00:15:54.000 "base_bdevs_list": [ 00:15:54.000 { 00:15:54.000 "name": "pt1", 00:15:54.000 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:54.000 "is_configured": true, 00:15:54.000 "data_offset": 256, 00:15:54.000 "data_size": 7936 00:15:54.000 }, 00:15:54.000 { 00:15:54.000 "name": "pt2", 00:15:54.000 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.000 "is_configured": true, 00:15:54.000 "data_offset": 256, 00:15:54.000 "data_size": 7936 00:15:54.000 } 00:15:54.000 ] 00:15:54.000 }' 00:15:54.000 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.000 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.260 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:54.260 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:54.260 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:54.260 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:54.260 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:54.260 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:54.260 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:54.260 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.260 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.260 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:54.260 [2024-11-18 03:15:57.811244] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:54.260 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.520 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:54.520 "name": "raid_bdev1", 00:15:54.520 "aliases": [ 00:15:54.520 "bbc0a16a-f470-4488-8281-07f562338cf0" 00:15:54.520 ], 00:15:54.520 "product_name": "Raid Volume", 00:15:54.520 "block_size": 4096, 00:15:54.520 "num_blocks": 7936, 00:15:54.520 "uuid": "bbc0a16a-f470-4488-8281-07f562338cf0", 00:15:54.520 "assigned_rate_limits": { 00:15:54.520 "rw_ios_per_sec": 0, 00:15:54.520 "rw_mbytes_per_sec": 0, 00:15:54.520 "r_mbytes_per_sec": 0, 00:15:54.520 "w_mbytes_per_sec": 0 00:15:54.520 }, 00:15:54.520 "claimed": false, 00:15:54.520 "zoned": false, 00:15:54.520 "supported_io_types": { 00:15:54.520 "read": true, 00:15:54.520 "write": true, 00:15:54.520 "unmap": false, 00:15:54.520 "flush": false, 00:15:54.520 "reset": true, 00:15:54.520 "nvme_admin": false, 00:15:54.520 "nvme_io": false, 00:15:54.520 "nvme_io_md": false, 00:15:54.520 "write_zeroes": true, 00:15:54.520 "zcopy": false, 00:15:54.520 "get_zone_info": false, 00:15:54.520 "zone_management": false, 00:15:54.520 "zone_append": false, 00:15:54.520 "compare": false, 00:15:54.520 "compare_and_write": false, 00:15:54.520 "abort": false, 00:15:54.520 "seek_hole": false, 00:15:54.520 "seek_data": false, 00:15:54.520 "copy": false, 00:15:54.520 "nvme_iov_md": false 00:15:54.520 }, 00:15:54.520 "memory_domains": [ 00:15:54.520 { 00:15:54.520 "dma_device_id": "system", 00:15:54.520 "dma_device_type": 1 00:15:54.520 }, 00:15:54.520 { 00:15:54.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.520 "dma_device_type": 2 00:15:54.520 }, 00:15:54.520 { 00:15:54.520 "dma_device_id": "system", 00:15:54.520 "dma_device_type": 1 00:15:54.520 }, 00:15:54.520 { 00:15:54.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.520 "dma_device_type": 2 00:15:54.520 } 00:15:54.520 ], 
00:15:54.520 "driver_specific": { 00:15:54.520 "raid": { 00:15:54.520 "uuid": "bbc0a16a-f470-4488-8281-07f562338cf0", 00:15:54.520 "strip_size_kb": 0, 00:15:54.520 "state": "online", 00:15:54.520 "raid_level": "raid1", 00:15:54.520 "superblock": true, 00:15:54.520 "num_base_bdevs": 2, 00:15:54.520 "num_base_bdevs_discovered": 2, 00:15:54.520 "num_base_bdevs_operational": 2, 00:15:54.520 "base_bdevs_list": [ 00:15:54.520 { 00:15:54.520 "name": "pt1", 00:15:54.520 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:54.520 "is_configured": true, 00:15:54.520 "data_offset": 256, 00:15:54.520 "data_size": 7936 00:15:54.520 }, 00:15:54.520 { 00:15:54.520 "name": "pt2", 00:15:54.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.520 "is_configured": true, 00:15:54.520 "data_offset": 256, 00:15:54.520 "data_size": 7936 00:15:54.520 } 00:15:54.520 ] 00:15:54.520 } 00:15:54.520 } 00:15:54.520 }' 00:15:54.520 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:54.520 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:54.520 pt2' 00:15:54.520 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.520 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:54.520 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.520 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.520 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:54.520 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.520 03:15:57 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.520 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.520 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:54.521 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:54.521 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.521 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:54.521 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.521 03:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.521 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.521 03:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.521 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:54.521 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:54.521 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:54.521 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:54.521 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.521 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.521 [2024-11-18 03:15:58.026799] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.521 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:54.521 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bbc0a16a-f470-4488-8281-07f562338cf0 00:15:54.521 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z bbc0a16a-f470-4488-8281-07f562338cf0 ']' 00:15:54.521 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:54.521 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.521 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.521 [2024-11-18 03:15:58.074465] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:54.521 [2024-11-18 03:15:58.074536] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.521 [2024-11-18 03:15:58.074643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.521 [2024-11-18 03:15:58.074751] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.521 [2024-11-18 03:15:58.074851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:54.521 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.521 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.521 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.521 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.521 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:54.521 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.781 [2024-11-18 03:15:58.210281] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:54.781 [2024-11-18 03:15:58.212212] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:54.781 [2024-11-18 03:15:58.212302] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:54.781 [2024-11-18 03:15:58.212357] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:54.781 [2024-11-18 03:15:58.212377] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:54.781 [2024-11-18 03:15:58.212387] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:15:54.781 request: 00:15:54.781 { 00:15:54.781 "name": "raid_bdev1", 00:15:54.781 "raid_level": "raid1", 00:15:54.781 "base_bdevs": [ 00:15:54.781 "malloc1", 00:15:54.781 "malloc2" 00:15:54.781 ], 00:15:54.781 "superblock": false, 00:15:54.781 "method": "bdev_raid_create", 00:15:54.781 "req_id": 1 00:15:54.781 } 00:15:54.781 Got JSON-RPC error response 00:15:54.781 response: 00:15:54.781 { 00:15:54.781 "code": -17, 00:15:54.781 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:54.781 } 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:54.781 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.782 [2024-11-18 03:15:58.274119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:54.782 [2024-11-18 03:15:58.274231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.782 [2024-11-18 03:15:58.274276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:54.782 [2024-11-18 03:15:58.274307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.782 [2024-11-18 03:15:58.276487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.782 [2024-11-18 03:15:58.276562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:54.782 [2024-11-18 03:15:58.276668] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:54.782 [2024-11-18 03:15:58.276738] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:54.782 pt1 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.782 "name": "raid_bdev1", 00:15:54.782 "uuid": "bbc0a16a-f470-4488-8281-07f562338cf0", 00:15:54.782 "strip_size_kb": 0, 00:15:54.782 "state": "configuring", 00:15:54.782 "raid_level": "raid1", 00:15:54.782 "superblock": true, 00:15:54.782 "num_base_bdevs": 2, 00:15:54.782 "num_base_bdevs_discovered": 1, 00:15:54.782 "num_base_bdevs_operational": 2, 00:15:54.782 "base_bdevs_list": [ 00:15:54.782 { 00:15:54.782 "name": "pt1", 00:15:54.782 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:54.782 "is_configured": true, 00:15:54.782 "data_offset": 256, 00:15:54.782 "data_size": 7936 00:15:54.782 }, 00:15:54.782 { 00:15:54.782 "name": null, 00:15:54.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.782 "is_configured": false, 00:15:54.782 "data_offset": 256, 00:15:54.782 "data_size": 7936 00:15:54.782 } 
00:15:54.782 ] 00:15:54.782 }' 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.782 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.351 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:55.351 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:55.351 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:55.351 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:55.351 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.351 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.351 [2024-11-18 03:15:58.681454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:55.351 [2024-11-18 03:15:58.681574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.351 [2024-11-18 03:15:58.681618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:55.351 [2024-11-18 03:15:58.681647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.351 [2024-11-18 03:15:58.682101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.351 [2024-11-18 03:15:58.682156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:55.351 [2024-11-18 03:15:58.682258] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:55.351 [2024-11-18 03:15:58.682307] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.351 [2024-11-18 03:15:58.682419] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000006980 00:15:55.351 [2024-11-18 03:15:58.682453] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:55.351 [2024-11-18 03:15:58.682703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:55.351 [2024-11-18 03:15:58.682866] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:55.351 [2024-11-18 03:15:58.682915] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:55.351 [2024-11-18 03:15:58.683078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.351 pt2 00:15:55.351 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.351 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:55.351 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:55.351 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:55.351 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.351 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.351 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.351 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.352 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.352 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.352 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.352 03:15:58 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.352 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.352 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.352 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.352 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.352 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.352 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.352 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.352 "name": "raid_bdev1", 00:15:55.352 "uuid": "bbc0a16a-f470-4488-8281-07f562338cf0", 00:15:55.352 "strip_size_kb": 0, 00:15:55.352 "state": "online", 00:15:55.352 "raid_level": "raid1", 00:15:55.352 "superblock": true, 00:15:55.352 "num_base_bdevs": 2, 00:15:55.352 "num_base_bdevs_discovered": 2, 00:15:55.352 "num_base_bdevs_operational": 2, 00:15:55.352 "base_bdevs_list": [ 00:15:55.352 { 00:15:55.352 "name": "pt1", 00:15:55.352 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.352 "is_configured": true, 00:15:55.352 "data_offset": 256, 00:15:55.352 "data_size": 7936 00:15:55.352 }, 00:15:55.352 { 00:15:55.352 "name": "pt2", 00:15:55.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.352 "is_configured": true, 00:15:55.352 "data_offset": 256, 00:15:55.352 "data_size": 7936 00:15:55.352 } 00:15:55.352 ] 00:15:55.352 }' 00:15:55.352 03:15:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.352 03:15:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.611 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:15:55.611 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:55.611 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:55.611 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:55.611 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:55.611 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:55.611 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:55.611 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:55.611 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.611 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.611 [2024-11-18 03:15:59.140936] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.611 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.611 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:55.611 "name": "raid_bdev1", 00:15:55.611 "aliases": [ 00:15:55.611 "bbc0a16a-f470-4488-8281-07f562338cf0" 00:15:55.611 ], 00:15:55.611 "product_name": "Raid Volume", 00:15:55.611 "block_size": 4096, 00:15:55.611 "num_blocks": 7936, 00:15:55.611 "uuid": "bbc0a16a-f470-4488-8281-07f562338cf0", 00:15:55.611 "assigned_rate_limits": { 00:15:55.611 "rw_ios_per_sec": 0, 00:15:55.611 "rw_mbytes_per_sec": 0, 00:15:55.611 "r_mbytes_per_sec": 0, 00:15:55.611 "w_mbytes_per_sec": 0 00:15:55.611 }, 00:15:55.611 "claimed": false, 00:15:55.611 "zoned": false, 00:15:55.611 "supported_io_types": { 00:15:55.611 "read": true, 00:15:55.611 "write": true, 00:15:55.611 "unmap": false, 
00:15:55.611 "flush": false, 00:15:55.611 "reset": true, 00:15:55.611 "nvme_admin": false, 00:15:55.611 "nvme_io": false, 00:15:55.611 "nvme_io_md": false, 00:15:55.611 "write_zeroes": true, 00:15:55.611 "zcopy": false, 00:15:55.611 "get_zone_info": false, 00:15:55.611 "zone_management": false, 00:15:55.611 "zone_append": false, 00:15:55.611 "compare": false, 00:15:55.611 "compare_and_write": false, 00:15:55.611 "abort": false, 00:15:55.611 "seek_hole": false, 00:15:55.611 "seek_data": false, 00:15:55.611 "copy": false, 00:15:55.611 "nvme_iov_md": false 00:15:55.611 }, 00:15:55.611 "memory_domains": [ 00:15:55.611 { 00:15:55.611 "dma_device_id": "system", 00:15:55.611 "dma_device_type": 1 00:15:55.611 }, 00:15:55.611 { 00:15:55.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.611 "dma_device_type": 2 00:15:55.611 }, 00:15:55.611 { 00:15:55.611 "dma_device_id": "system", 00:15:55.611 "dma_device_type": 1 00:15:55.611 }, 00:15:55.611 { 00:15:55.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.612 "dma_device_type": 2 00:15:55.612 } 00:15:55.612 ], 00:15:55.612 "driver_specific": { 00:15:55.612 "raid": { 00:15:55.612 "uuid": "bbc0a16a-f470-4488-8281-07f562338cf0", 00:15:55.612 "strip_size_kb": 0, 00:15:55.612 "state": "online", 00:15:55.612 "raid_level": "raid1", 00:15:55.612 "superblock": true, 00:15:55.612 "num_base_bdevs": 2, 00:15:55.612 "num_base_bdevs_discovered": 2, 00:15:55.612 "num_base_bdevs_operational": 2, 00:15:55.612 "base_bdevs_list": [ 00:15:55.612 { 00:15:55.612 "name": "pt1", 00:15:55.612 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.612 "is_configured": true, 00:15:55.612 "data_offset": 256, 00:15:55.612 "data_size": 7936 00:15:55.612 }, 00:15:55.612 { 00:15:55.612 "name": "pt2", 00:15:55.612 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.612 "is_configured": true, 00:15:55.612 "data_offset": 256, 00:15:55.612 "data_size": 7936 00:15:55.612 } 00:15:55.612 ] 00:15:55.612 } 00:15:55.612 } 00:15:55.612 }' 00:15:55.612 
03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:55.870 pt2' 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:55.870 [2024-11-18 03:15:59.340604] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' bbc0a16a-f470-4488-8281-07f562338cf0 '!=' bbc0a16a-f470-4488-8281-07f562338cf0 ']' 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.870 [2024-11-18 03:15:59.388268] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:15:55.870 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.871 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:55.871 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.871 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.871 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.871 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.871 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:55.871 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.871 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.871 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.871 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.871 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.871 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.871 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.871 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.871 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.871 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.871 "name": "raid_bdev1", 00:15:55.871 "uuid": 
"bbc0a16a-f470-4488-8281-07f562338cf0", 00:15:55.871 "strip_size_kb": 0, 00:15:55.871 "state": "online", 00:15:55.871 "raid_level": "raid1", 00:15:55.871 "superblock": true, 00:15:55.871 "num_base_bdevs": 2, 00:15:55.871 "num_base_bdevs_discovered": 1, 00:15:55.871 "num_base_bdevs_operational": 1, 00:15:55.871 "base_bdevs_list": [ 00:15:55.871 { 00:15:55.871 "name": null, 00:15:55.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.871 "is_configured": false, 00:15:55.871 "data_offset": 0, 00:15:55.871 "data_size": 7936 00:15:55.871 }, 00:15:55.871 { 00:15:55.871 "name": "pt2", 00:15:55.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.871 "is_configured": true, 00:15:55.871 "data_offset": 256, 00:15:55.871 "data_size": 7936 00:15:55.871 } 00:15:55.871 ] 00:15:55.871 }' 00:15:55.871 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.871 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.439 [2024-11-18 03:15:59.767558] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:56.439 [2024-11-18 03:15:59.767589] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:56.439 [2024-11-18 03:15:59.767677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.439 [2024-11-18 03:15:59.767725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.439 [2024-11-18 03:15:59.767734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state 
offline 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.439 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.439 [2024-11-18 03:15:59.835437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:56.439 [2024-11-18 03:15:59.835539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.439 [2024-11-18 03:15:59.835577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:56.439 [2024-11-18 03:15:59.835606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.440 [2024-11-18 03:15:59.837780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.440 [2024-11-18 03:15:59.837855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:56.440 [2024-11-18 03:15:59.837979] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:56.440 [2024-11-18 03:15:59.838031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:56.440 [2024-11-18 03:15:59.838127] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:56.440 [2024-11-18 03:15:59.838161] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:56.440 [2024-11-18 03:15:59.838403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:56.440 [2024-11-18 03:15:59.838558] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:56.440 [2024-11-18 03:15:59.838572] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000006d00 00:15:56.440 [2024-11-18 03:15:59.838680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.440 pt2 00:15:56.440 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.440 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:56.440 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.440 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.440 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.440 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.440 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:56.440 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.440 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.440 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.440 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.440 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.440 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.440 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.440 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.440 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.440 03:15:59 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.440 "name": "raid_bdev1", 00:15:56.440 "uuid": "bbc0a16a-f470-4488-8281-07f562338cf0", 00:15:56.440 "strip_size_kb": 0, 00:15:56.440 "state": "online", 00:15:56.440 "raid_level": "raid1", 00:15:56.440 "superblock": true, 00:15:56.440 "num_base_bdevs": 2, 00:15:56.440 "num_base_bdevs_discovered": 1, 00:15:56.440 "num_base_bdevs_operational": 1, 00:15:56.440 "base_bdevs_list": [ 00:15:56.440 { 00:15:56.440 "name": null, 00:15:56.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.440 "is_configured": false, 00:15:56.440 "data_offset": 256, 00:15:56.440 "data_size": 7936 00:15:56.440 }, 00:15:56.440 { 00:15:56.440 "name": "pt2", 00:15:56.440 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.440 "is_configured": true, 00:15:56.440 "data_offset": 256, 00:15:56.440 "data_size": 7936 00:15:56.440 } 00:15:56.440 ] 00:15:56.440 }' 00:15:56.440 03:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.440 03:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.699 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:56.699 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.699 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.699 [2024-11-18 03:16:00.230860] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:56.699 [2024-11-18 03:16:00.230931] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:56.699 [2024-11-18 03:16:00.231040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.699 [2024-11-18 03:16:00.231105] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:15:56.699 [2024-11-18 03:16:00.231158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:56.699 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.699 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.699 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.699 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:56.699 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.699 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.959 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:56.959 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:56.959 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:56.959 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:56.959 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.959 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.959 [2024-11-18 03:16:00.286719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:56.959 [2024-11-18 03:16:00.286835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.959 [2024-11-18 03:16:00.286877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:56.959 [2024-11-18 03:16:00.286930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.959 [2024-11-18 03:16:00.289155] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.959 [2024-11-18 03:16:00.289230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:56.959 [2024-11-18 03:16:00.289329] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:56.959 [2024-11-18 03:16:00.289391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:56.959 [2024-11-18 03:16:00.289521] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:56.959 [2024-11-18 03:16:00.289586] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:56.959 [2024-11-18 03:16:00.289628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:15:56.959 [2024-11-18 03:16:00.289709] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:56.959 [2024-11-18 03:16:00.289813] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:56.959 [2024-11-18 03:16:00.289853] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:56.959 pt1 00:15:56.959 [2024-11-18 03:16:00.290116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:56.959 [2024-11-18 03:16:00.290236] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:56.959 [2024-11-18 03:16:00.290247] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:56.959 [2024-11-18 03:16:00.290358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.959 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.959 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:15:56.959 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:56.959 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.959 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.959 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.959 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.960 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:56.960 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.960 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.960 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.960 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.960 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.960 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.960 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.960 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.960 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.960 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.960 "name": "raid_bdev1", 00:15:56.960 "uuid": "bbc0a16a-f470-4488-8281-07f562338cf0", 00:15:56.960 "strip_size_kb": 0, 00:15:56.960 "state": "online", 00:15:56.960 
"raid_level": "raid1", 00:15:56.960 "superblock": true, 00:15:56.960 "num_base_bdevs": 2, 00:15:56.960 "num_base_bdevs_discovered": 1, 00:15:56.960 "num_base_bdevs_operational": 1, 00:15:56.960 "base_bdevs_list": [ 00:15:56.960 { 00:15:56.960 "name": null, 00:15:56.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.960 "is_configured": false, 00:15:56.960 "data_offset": 256, 00:15:56.960 "data_size": 7936 00:15:56.960 }, 00:15:56.960 { 00:15:56.960 "name": "pt2", 00:15:56.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.960 "is_configured": true, 00:15:56.960 "data_offset": 256, 00:15:56.960 "data_size": 7936 00:15:56.960 } 00:15:56.960 ] 00:15:56.960 }' 00:15:56.960 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.960 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.219 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:57.219 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:57.219 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.219 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.219 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.479 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:57.479 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:57.479 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:57.479 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.479 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:15:57.479 [2024-11-18 03:16:00.830079] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.479 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.479 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' bbc0a16a-f470-4488-8281-07f562338cf0 '!=' bbc0a16a-f470-4488-8281-07f562338cf0 ']' 00:15:57.479 03:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96659 00:15:57.479 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 96659 ']' 00:15:57.479 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 96659 00:15:57.479 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:15:57.479 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.479 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96659 00:15:57.479 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:57.479 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:57.479 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96659' 00:15:57.479 killing process with pid 96659 00:15:57.479 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 96659 00:15:57.479 [2024-11-18 03:16:00.912005] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.479 [2024-11-18 03:16:00.912094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.479 03:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 96659 00:15:57.479 [2024-11-18 03:16:00.912182] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.479 [2024-11-18 03:16:00.912192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:57.479 [2024-11-18 03:16:00.935489] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.739 03:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:15:57.739 00:15:57.739 real 0m4.788s 00:15:57.739 user 0m7.825s 00:15:57.739 sys 0m0.976s 00:15:57.739 03:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:57.739 03:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.739 ************************************ 00:15:57.739 END TEST raid_superblock_test_4k 00:15:57.739 ************************************ 00:15:57.739 03:16:01 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:15:57.739 03:16:01 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:15:57.739 03:16:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:57.739 03:16:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:57.739 03:16:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:57.739 ************************************ 00:15:57.739 START TEST raid_rebuild_test_sb_4k 00:15:57.739 ************************************ 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:57.739 
03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:57.739 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=96974 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 96974 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96974 ']' 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.739 03:16:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:57.999 [2024-11-18 03:16:01.322652] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:57.999 [2024-11-18 03:16:01.322882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96974 ] 00:15:57.999 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:57.999 Zero copy mechanism will not be used. 00:15:57.999 [2024-11-18 03:16:01.481320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.999 [2024-11-18 03:16:01.531887] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.258 [2024-11-18 03:16:01.574635] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.258 [2024-11-18 03:16:01.574718] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.827 BaseBdev1_malloc 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.827 03:16:02 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.827 [2024-11-18 03:16:02.177217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:58.827 [2024-11-18 03:16:02.177326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.827 [2024-11-18 03:16:02.177390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:58.827 [2024-11-18 03:16:02.177426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.827 [2024-11-18 03:16:02.179618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.827 [2024-11-18 03:16:02.179695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:58.827 BaseBdev1 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.827 BaseBdev2_malloc 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.827 [2024-11-18 03:16:02.211832] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:15:58.827 [2024-11-18 03:16:02.211946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.827 [2024-11-18 03:16:02.212018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:58.827 [2024-11-18 03:16:02.212053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.827 [2024-11-18 03:16:02.214150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.827 [2024-11-18 03:16:02.214220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:58.827 BaseBdev2 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.827 spare_malloc 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.827 spare_delay 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.827 [2024-11-18 03:16:02.248430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:58.827 [2024-11-18 03:16:02.248530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.827 [2024-11-18 03:16:02.248588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:58.827 [2024-11-18 03:16:02.248618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.827 [2024-11-18 03:16:02.250765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.827 [2024-11-18 03:16:02.250833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:58.827 spare 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.827 [2024-11-18 03:16:02.260453] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.827 [2024-11-18 03:16:02.262316] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:58.827 [2024-11-18 03:16:02.262522] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:58.827 [2024-11-18 03:16:02.262558] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:58.827 [2024-11-18 03:16:02.262847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005c70 00:15:58.827 [2024-11-18 03:16:02.263031] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:58.827 [2024-11-18 03:16:02.263078] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:58.827 [2024-11-18 03:16:02.263266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.827 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.828 
03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.828 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.828 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.828 "name": "raid_bdev1", 00:15:58.828 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:15:58.828 "strip_size_kb": 0, 00:15:58.828 "state": "online", 00:15:58.828 "raid_level": "raid1", 00:15:58.828 "superblock": true, 00:15:58.828 "num_base_bdevs": 2, 00:15:58.828 "num_base_bdevs_discovered": 2, 00:15:58.828 "num_base_bdevs_operational": 2, 00:15:58.828 "base_bdevs_list": [ 00:15:58.828 { 00:15:58.828 "name": "BaseBdev1", 00:15:58.828 "uuid": "27ee54db-a6f1-5346-b91a-ddd7d323ae92", 00:15:58.828 "is_configured": true, 00:15:58.828 "data_offset": 256, 00:15:58.828 "data_size": 7936 00:15:58.828 }, 00:15:58.828 { 00:15:58.828 "name": "BaseBdev2", 00:15:58.828 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:15:58.828 "is_configured": true, 00:15:58.828 "data_offset": 256, 00:15:58.828 "data_size": 7936 00:15:58.828 } 00:15:58.828 ] 00:15:58.828 }' 00:15:58.828 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.828 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.396 [2024-11-18 03:16:02.676060] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.396 03:16:02 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:59.396 03:16:02 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:59.396 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:59.396 [2024-11-18 03:16:02.955298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:59.396 /dev/nbd0 00:15:59.656 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:59.656 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:59.656 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:59.656 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:59.656 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:59.656 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:59.656 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:59.656 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:59.656 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:59.656 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:59.656 03:16:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.656 1+0 records in 00:15:59.656 1+0 records out 00:15:59.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525245 s, 7.8 MB/s 00:15:59.656 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- 
# stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.656 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:59.656 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.656 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:59.656 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:59.656 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:59.656 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:59.656 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:59.656 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:59.656 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:00.225 7936+0 records in 00:16:00.225 7936+0 records out 00:16:00.225 32505856 bytes (33 MB, 31 MiB) copied, 0.517061 s, 62.9 MB/s 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:00.225 [2024-11-18 03:16:03.735260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.225 [2024-11-18 03:16:03.767302] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.225 03:16:03 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.225 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.484 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.484 "name": "raid_bdev1", 00:16:00.484 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:00.484 "strip_size_kb": 0, 00:16:00.484 "state": "online", 00:16:00.484 "raid_level": "raid1", 00:16:00.484 "superblock": true, 00:16:00.484 "num_base_bdevs": 2, 00:16:00.484 "num_base_bdevs_discovered": 1, 00:16:00.484 "num_base_bdevs_operational": 1, 00:16:00.484 "base_bdevs_list": [ 00:16:00.484 { 00:16:00.484 "name": null, 00:16:00.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.484 "is_configured": false, 00:16:00.484 "data_offset": 0, 00:16:00.484 "data_size": 7936 
00:16:00.484 }, 00:16:00.484 { 00:16:00.484 "name": "BaseBdev2", 00:16:00.484 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:00.484 "is_configured": true, 00:16:00.484 "data_offset": 256, 00:16:00.485 "data_size": 7936 00:16:00.485 } 00:16:00.485 ] 00:16:00.485 }' 00:16:00.485 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.485 03:16:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.743 03:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:00.743 03:16:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.743 03:16:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.743 [2024-11-18 03:16:04.238531] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:00.743 [2024-11-18 03:16:04.242819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:16:00.743 03:16:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.743 03:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:00.743 [2024-11-18 03:16:04.244791] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:01.682 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.682 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.682 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.682 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.682 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.682 
03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.682 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.682 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.682 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.942 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.942 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.942 "name": "raid_bdev1", 00:16:01.942 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:01.942 "strip_size_kb": 0, 00:16:01.942 "state": "online", 00:16:01.942 "raid_level": "raid1", 00:16:01.942 "superblock": true, 00:16:01.942 "num_base_bdevs": 2, 00:16:01.942 "num_base_bdevs_discovered": 2, 00:16:01.942 "num_base_bdevs_operational": 2, 00:16:01.942 "process": { 00:16:01.942 "type": "rebuild", 00:16:01.942 "target": "spare", 00:16:01.942 "progress": { 00:16:01.942 "blocks": 2560, 00:16:01.942 "percent": 32 00:16:01.942 } 00:16:01.942 }, 00:16:01.942 "base_bdevs_list": [ 00:16:01.942 { 00:16:01.942 "name": "spare", 00:16:01.942 "uuid": "90aa8996-2ed7-5420-87b2-9b727e0ee9dc", 00:16:01.942 "is_configured": true, 00:16:01.942 "data_offset": 256, 00:16:01.943 "data_size": 7936 00:16:01.943 }, 00:16:01.943 { 00:16:01.943 "name": "BaseBdev2", 00:16:01.943 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:01.943 "is_configured": true, 00:16:01.943 "data_offset": 256, 00:16:01.943 "data_size": 7936 00:16:01.943 } 00:16:01.943 ] 00:16:01.943 }' 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.943 03:16:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.943 [2024-11-18 03:16:05.369573] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.943 [2024-11-18 03:16:05.449982] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:01.943 [2024-11-18 03:16:05.450056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.943 [2024-11-18 03:16:05.450078] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.943 [2024-11-18 03:16:05.450086] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:01.943 
03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.943 "name": "raid_bdev1", 00:16:01.943 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:01.943 "strip_size_kb": 0, 00:16:01.943 "state": "online", 00:16:01.943 "raid_level": "raid1", 00:16:01.943 "superblock": true, 00:16:01.943 "num_base_bdevs": 2, 00:16:01.943 "num_base_bdevs_discovered": 1, 00:16:01.943 "num_base_bdevs_operational": 1, 00:16:01.943 "base_bdevs_list": [ 00:16:01.943 { 00:16:01.943 "name": null, 00:16:01.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.943 "is_configured": false, 00:16:01.943 "data_offset": 0, 00:16:01.943 "data_size": 7936 00:16:01.943 }, 00:16:01.943 { 00:16:01.943 "name": "BaseBdev2", 00:16:01.943 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:01.943 "is_configured": true, 00:16:01.943 "data_offset": 256, 00:16:01.943 "data_size": 7936 00:16:01.943 } 00:16:01.943 ] 00:16:01.943 }' 00:16:01.943 03:16:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.943 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.513 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:02.513 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.513 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:02.513 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:02.513 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.513 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.513 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.513 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.513 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.513 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.513 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.513 "name": "raid_bdev1", 00:16:02.513 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:02.513 "strip_size_kb": 0, 00:16:02.513 "state": "online", 00:16:02.513 "raid_level": "raid1", 00:16:02.513 "superblock": true, 00:16:02.513 "num_base_bdevs": 2, 00:16:02.513 "num_base_bdevs_discovered": 1, 00:16:02.513 "num_base_bdevs_operational": 1, 00:16:02.513 "base_bdevs_list": [ 00:16:02.513 { 00:16:02.513 "name": null, 00:16:02.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.513 "is_configured": false, 00:16:02.513 "data_offset": 0, 00:16:02.513 
"data_size": 7936 00:16:02.513 }, 00:16:02.513 { 00:16:02.513 "name": "BaseBdev2", 00:16:02.513 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:02.513 "is_configured": true, 00:16:02.513 "data_offset": 256, 00:16:02.513 "data_size": 7936 00:16:02.513 } 00:16:02.513 ] 00:16:02.513 }' 00:16:02.513 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.513 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:02.513 03:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.513 03:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:02.513 03:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:02.513 03:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.513 03:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.513 [2024-11-18 03:16:06.017937] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:02.513 [2024-11-18 03:16:06.022130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:16:02.513 03:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.513 03:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:02.513 [2024-11-18 03:16:06.024060] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:03.453 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.453 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.453 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.453 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.453 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.713 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.713 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.713 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.713 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.713 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.713 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.713 "name": "raid_bdev1", 00:16:03.713 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:03.713 "strip_size_kb": 0, 00:16:03.713 "state": "online", 00:16:03.713 "raid_level": "raid1", 00:16:03.713 "superblock": true, 00:16:03.713 "num_base_bdevs": 2, 00:16:03.713 "num_base_bdevs_discovered": 2, 00:16:03.713 "num_base_bdevs_operational": 2, 00:16:03.713 "process": { 00:16:03.713 "type": "rebuild", 00:16:03.713 "target": "spare", 00:16:03.713 "progress": { 00:16:03.713 "blocks": 2560, 00:16:03.713 "percent": 32 00:16:03.713 } 00:16:03.713 }, 00:16:03.713 "base_bdevs_list": [ 00:16:03.713 { 00:16:03.713 "name": "spare", 00:16:03.713 "uuid": "90aa8996-2ed7-5420-87b2-9b727e0ee9dc", 00:16:03.713 "is_configured": true, 00:16:03.713 "data_offset": 256, 00:16:03.713 "data_size": 7936 00:16:03.713 }, 00:16:03.713 { 00:16:03.713 "name": "BaseBdev2", 00:16:03.713 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:03.713 "is_configured": true, 00:16:03.713 "data_offset": 256, 00:16:03.713 "data_size": 7936 00:16:03.713 } 00:16:03.714 ] 
00:16:03.714 }' 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:03.714 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=561 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.714 "name": "raid_bdev1", 00:16:03.714 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:03.714 "strip_size_kb": 0, 00:16:03.714 "state": "online", 00:16:03.714 "raid_level": "raid1", 00:16:03.714 "superblock": true, 00:16:03.714 "num_base_bdevs": 2, 00:16:03.714 "num_base_bdevs_discovered": 2, 00:16:03.714 "num_base_bdevs_operational": 2, 00:16:03.714 "process": { 00:16:03.714 "type": "rebuild", 00:16:03.714 "target": "spare", 00:16:03.714 "progress": { 00:16:03.714 "blocks": 2816, 00:16:03.714 "percent": 35 00:16:03.714 } 00:16:03.714 }, 00:16:03.714 "base_bdevs_list": [ 00:16:03.714 { 00:16:03.714 "name": "spare", 00:16:03.714 "uuid": "90aa8996-2ed7-5420-87b2-9b727e0ee9dc", 00:16:03.714 "is_configured": true, 00:16:03.714 "data_offset": 256, 00:16:03.714 "data_size": 7936 00:16:03.714 }, 00:16:03.714 { 00:16:03.714 "name": "BaseBdev2", 00:16:03.714 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:03.714 "is_configured": true, 00:16:03.714 "data_offset": 256, 00:16:03.714 "data_size": 7936 00:16:03.714 } 00:16:03.714 ] 00:16:03.714 }' 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.714 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- 
# jq -r '.process.target // "none"' 00:16:03.974 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.974 03:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.917 03:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.917 03:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.917 03:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.917 03:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.917 03:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.917 03:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.917 03:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.917 03:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.917 03:16:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.917 03:16:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.917 03:16:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.917 03:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.917 "name": "raid_bdev1", 00:16:04.917 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:04.917 "strip_size_kb": 0, 00:16:04.918 "state": "online", 00:16:04.918 "raid_level": "raid1", 00:16:04.918 "superblock": true, 00:16:04.918 "num_base_bdevs": 2, 00:16:04.918 "num_base_bdevs_discovered": 2, 00:16:04.918 "num_base_bdevs_operational": 2, 00:16:04.918 "process": { 00:16:04.918 
"type": "rebuild", 00:16:04.918 "target": "spare", 00:16:04.918 "progress": { 00:16:04.918 "blocks": 5632, 00:16:04.918 "percent": 70 00:16:04.918 } 00:16:04.918 }, 00:16:04.918 "base_bdevs_list": [ 00:16:04.918 { 00:16:04.918 "name": "spare", 00:16:04.918 "uuid": "90aa8996-2ed7-5420-87b2-9b727e0ee9dc", 00:16:04.918 "is_configured": true, 00:16:04.918 "data_offset": 256, 00:16:04.918 "data_size": 7936 00:16:04.918 }, 00:16:04.918 { 00:16:04.918 "name": "BaseBdev2", 00:16:04.918 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:04.918 "is_configured": true, 00:16:04.918 "data_offset": 256, 00:16:04.918 "data_size": 7936 00:16:04.918 } 00:16:04.918 ] 00:16:04.918 }' 00:16:04.918 03:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.918 03:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.918 03:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.918 03:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.918 03:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:05.898 [2024-11-18 03:16:09.136114] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:05.898 [2024-11-18 03:16:09.136209] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:05.898 [2024-11-18 03:16:09.136315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.898 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.898 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.898 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.898 
03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.898 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.898 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.898 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.898 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.898 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.898 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.898 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.898 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.898 "name": "raid_bdev1", 00:16:05.898 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:05.898 "strip_size_kb": 0, 00:16:05.898 "state": "online", 00:16:05.898 "raid_level": "raid1", 00:16:05.898 "superblock": true, 00:16:05.898 "num_base_bdevs": 2, 00:16:05.898 "num_base_bdevs_discovered": 2, 00:16:05.898 "num_base_bdevs_operational": 2, 00:16:05.898 "base_bdevs_list": [ 00:16:05.898 { 00:16:05.898 "name": "spare", 00:16:05.898 "uuid": "90aa8996-2ed7-5420-87b2-9b727e0ee9dc", 00:16:05.898 "is_configured": true, 00:16:05.898 "data_offset": 256, 00:16:05.898 "data_size": 7936 00:16:05.898 }, 00:16:05.898 { 00:16:05.898 "name": "BaseBdev2", 00:16:05.898 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:05.898 "is_configured": true, 00:16:05.898 "data_offset": 256, 00:16:05.898 "data_size": 7936 00:16:05.898 } 00:16:05.898 ] 00:16:05.898 }' 00:16:05.898 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.157 03:16:09 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.158 "name": "raid_bdev1", 00:16:06.158 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:06.158 "strip_size_kb": 0, 00:16:06.158 "state": "online", 00:16:06.158 "raid_level": "raid1", 00:16:06.158 "superblock": true, 00:16:06.158 "num_base_bdevs": 2, 00:16:06.158 "num_base_bdevs_discovered": 2, 00:16:06.158 
"num_base_bdevs_operational": 2, 00:16:06.158 "base_bdevs_list": [ 00:16:06.158 { 00:16:06.158 "name": "spare", 00:16:06.158 "uuid": "90aa8996-2ed7-5420-87b2-9b727e0ee9dc", 00:16:06.158 "is_configured": true, 00:16:06.158 "data_offset": 256, 00:16:06.158 "data_size": 7936 00:16:06.158 }, 00:16:06.158 { 00:16:06.158 "name": "BaseBdev2", 00:16:06.158 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:06.158 "is_configured": true, 00:16:06.158 "data_offset": 256, 00:16:06.158 "data_size": 7936 00:16:06.158 } 00:16:06.158 ] 00:16:06.158 }' 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 
-- # local num_base_bdevs_discovered 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.158 "name": "raid_bdev1", 00:16:06.158 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:06.158 "strip_size_kb": 0, 00:16:06.158 "state": "online", 00:16:06.158 "raid_level": "raid1", 00:16:06.158 "superblock": true, 00:16:06.158 "num_base_bdevs": 2, 00:16:06.158 "num_base_bdevs_discovered": 2, 00:16:06.158 "num_base_bdevs_operational": 2, 00:16:06.158 "base_bdevs_list": [ 00:16:06.158 { 00:16:06.158 "name": "spare", 00:16:06.158 "uuid": "90aa8996-2ed7-5420-87b2-9b727e0ee9dc", 00:16:06.158 "is_configured": true, 00:16:06.158 "data_offset": 256, 00:16:06.158 "data_size": 7936 00:16:06.158 }, 00:16:06.158 { 00:16:06.158 "name": "BaseBdev2", 00:16:06.158 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:06.158 "is_configured": true, 00:16:06.158 "data_offset": 256, 00:16:06.158 "data_size": 7936 00:16:06.158 } 00:16:06.158 ] 00:16:06.158 }' 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.158 03:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.727 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 
00:16:06.727 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.728 [2024-11-18 03:16:10.055191] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.728 [2024-11-18 03:16:10.055223] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.728 [2024-11-18 03:16:10.055311] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.728 [2024-11-18 03:16:10.055380] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.728 [2024-11-18 03:16:10.055393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.728 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:06.987 /dev/nbd0 00:16:06.987 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:06.987 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:06.987 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:06.987 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:16:06.987 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:06.987 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:06.987 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:06.987 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:16:06.987 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:06.987 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:06.987 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.987 1+0 records in 00:16:06.987 1+0 records out 00:16:06.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306709 s, 13.4 MB/s 00:16:06.987 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.987 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:16:06.987 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.987 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:06.987 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:16:06.988 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.988 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.988 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:06.988 /dev/nbd1 00:16:07.247 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:07.247 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:07.247 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:07.247 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:16:07.247 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:07.247 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:07.247 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:07.247 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:16:07.247 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:07.247 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:07.247 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.247 1+0 records in 00:16:07.247 1+0 records out 00:16:07.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379565 s, 10.8 MB/s 00:16:07.247 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.247 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:16:07.247 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.247 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:07.247 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:16:07.247 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:07.248 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:07.248 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:07.248 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 
00:16:07.248 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.248 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:07.248 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:07.248 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:07.248 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.248 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:07.507 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:07.507 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:07.507 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:07.507 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.507 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.507 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:07.507 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:07.507 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.507 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.507 03:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.767 [2024-11-18 03:16:11.113225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:07.767 [2024-11-18 03:16:11.113283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.767 [2024-11-18 03:16:11.113304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:07.767 [2024-11-18 03:16:11.113317] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.767 [2024-11-18 03:16:11.115520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.767 [2024-11-18 03:16:11.115558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:07.767 [2024-11-18 03:16:11.115642] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:07.767 [2024-11-18 03:16:11.115692] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.767 [2024-11-18 03:16:11.115822] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:07.767 spare 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.767 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.767 [2024-11-18 03:16:11.215737] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:07.767 [2024-11-18 03:16:11.215779] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:07.767 [2024-11-18 03:16:11.216114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:16:07.767 [2024-11-18 03:16:11.216277] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:07.767 [2024-11-18 03:16:11.216299] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:07.767 [2024-11-18 03:16:11.216465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.768 "name": "raid_bdev1", 00:16:07.768 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:07.768 "strip_size_kb": 0, 00:16:07.768 "state": "online", 00:16:07.768 "raid_level": 
"raid1", 00:16:07.768 "superblock": true, 00:16:07.768 "num_base_bdevs": 2, 00:16:07.768 "num_base_bdevs_discovered": 2, 00:16:07.768 "num_base_bdevs_operational": 2, 00:16:07.768 "base_bdevs_list": [ 00:16:07.768 { 00:16:07.768 "name": "spare", 00:16:07.768 "uuid": "90aa8996-2ed7-5420-87b2-9b727e0ee9dc", 00:16:07.768 "is_configured": true, 00:16:07.768 "data_offset": 256, 00:16:07.768 "data_size": 7936 00:16:07.768 }, 00:16:07.768 { 00:16:07.768 "name": "BaseBdev2", 00:16:07.768 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:07.768 "is_configured": true, 00:16:07.768 "data_offset": 256, 00:16:07.768 "data_size": 7936 00:16:07.768 } 00:16:07.768 ] 00:16:07.768 }' 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.768 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.337 "name": "raid_bdev1", 00:16:08.337 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:08.337 "strip_size_kb": 0, 00:16:08.337 "state": "online", 00:16:08.337 "raid_level": "raid1", 00:16:08.337 "superblock": true, 00:16:08.337 "num_base_bdevs": 2, 00:16:08.337 "num_base_bdevs_discovered": 2, 00:16:08.337 "num_base_bdevs_operational": 2, 00:16:08.337 "base_bdevs_list": [ 00:16:08.337 { 00:16:08.337 "name": "spare", 00:16:08.337 "uuid": "90aa8996-2ed7-5420-87b2-9b727e0ee9dc", 00:16:08.337 "is_configured": true, 00:16:08.337 "data_offset": 256, 00:16:08.337 "data_size": 7936 00:16:08.337 }, 00:16:08.337 { 00:16:08.337 "name": "BaseBdev2", 00:16:08.337 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:08.337 "is_configured": true, 00:16:08.337 "data_offset": 256, 00:16:08.337 "data_size": 7936 00:16:08.337 } 00:16:08.337 ] 00:16:08.337 }' 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.337 [2024-11-18 03:16:11.784124] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.337 "name": "raid_bdev1", 00:16:08.337 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:08.337 "strip_size_kb": 0, 00:16:08.337 "state": "online", 00:16:08.337 "raid_level": "raid1", 00:16:08.337 "superblock": true, 00:16:08.337 "num_base_bdevs": 2, 00:16:08.337 "num_base_bdevs_discovered": 1, 00:16:08.337 "num_base_bdevs_operational": 1, 00:16:08.337 "base_bdevs_list": [ 00:16:08.337 { 00:16:08.337 "name": null, 00:16:08.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.337 "is_configured": false, 00:16:08.337 "data_offset": 0, 00:16:08.337 "data_size": 7936 00:16:08.337 }, 00:16:08.337 { 00:16:08.337 "name": "BaseBdev2", 00:16:08.337 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:08.337 "is_configured": true, 00:16:08.337 "data_offset": 256, 00:16:08.337 "data_size": 7936 00:16:08.337 } 00:16:08.337 ] 00:16:08.337 }' 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.337 03:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.907 03:16:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:08.907 03:16:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.907 03:16:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.907 [2024-11-18 03:16:12.247351] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:16:08.907 [2024-11-18 03:16:12.247566] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:08.907 [2024-11-18 03:16:12.247591] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:08.907 [2024-11-18 03:16:12.247634] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:08.907 [2024-11-18 03:16:12.251703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:16:08.907 03:16:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.907 03:16:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:08.907 [2024-11-18 03:16:12.253671] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:09.846 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.847 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.847 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.847 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.847 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.847 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.847 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.847 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.847 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.847 03:16:13 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.847 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.847 "name": "raid_bdev1", 00:16:09.847 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:09.847 "strip_size_kb": 0, 00:16:09.847 "state": "online", 00:16:09.847 "raid_level": "raid1", 00:16:09.847 "superblock": true, 00:16:09.847 "num_base_bdevs": 2, 00:16:09.847 "num_base_bdevs_discovered": 2, 00:16:09.847 "num_base_bdevs_operational": 2, 00:16:09.847 "process": { 00:16:09.847 "type": "rebuild", 00:16:09.847 "target": "spare", 00:16:09.847 "progress": { 00:16:09.847 "blocks": 2560, 00:16:09.847 "percent": 32 00:16:09.847 } 00:16:09.847 }, 00:16:09.847 "base_bdevs_list": [ 00:16:09.847 { 00:16:09.847 "name": "spare", 00:16:09.847 "uuid": "90aa8996-2ed7-5420-87b2-9b727e0ee9dc", 00:16:09.847 "is_configured": true, 00:16:09.847 "data_offset": 256, 00:16:09.847 "data_size": 7936 00:16:09.847 }, 00:16:09.847 { 00:16:09.847 "name": "BaseBdev2", 00:16:09.847 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:09.847 "is_configured": true, 00:16:09.847 "data_offset": 256, 00:16:09.847 "data_size": 7936 00:16:09.847 } 00:16:09.847 ] 00:16:09.847 }' 00:16:09.847 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.847 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.847 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.847 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.847 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:09.847 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.847 03:16:13 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.847 [2024-11-18 03:16:13.415017] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.106 [2024-11-18 03:16:13.458308] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:10.106 [2024-11-18 03:16:13.458385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.106 [2024-11-18 03:16:13.458402] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.106 [2024-11-18 03:16:13.458410] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:10.106 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.106 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:10.106 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.106 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.106 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.106 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.106 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:10.107 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.107 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.107 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.107 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.107 03:16:13 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.107 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.107 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.107 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.107 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.107 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.107 "name": "raid_bdev1", 00:16:10.107 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:10.107 "strip_size_kb": 0, 00:16:10.107 "state": "online", 00:16:10.107 "raid_level": "raid1", 00:16:10.107 "superblock": true, 00:16:10.107 "num_base_bdevs": 2, 00:16:10.107 "num_base_bdevs_discovered": 1, 00:16:10.107 "num_base_bdevs_operational": 1, 00:16:10.107 "base_bdevs_list": [ 00:16:10.107 { 00:16:10.107 "name": null, 00:16:10.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.107 "is_configured": false, 00:16:10.107 "data_offset": 0, 00:16:10.107 "data_size": 7936 00:16:10.107 }, 00:16:10.107 { 00:16:10.107 "name": "BaseBdev2", 00:16:10.107 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:10.107 "is_configured": true, 00:16:10.107 "data_offset": 256, 00:16:10.107 "data_size": 7936 00:16:10.107 } 00:16:10.107 ] 00:16:10.107 }' 00:16:10.107 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.107 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.367 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:10.367 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.367 03:16:13 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.367 [2024-11-18 03:16:13.906088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:10.367 [2024-11-18 03:16:13.906159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.367 [2024-11-18 03:16:13.906192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:10.367 [2024-11-18 03:16:13.906202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.367 [2024-11-18 03:16:13.906680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.367 [2024-11-18 03:16:13.906710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:10.367 [2024-11-18 03:16:13.906815] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:10.367 [2024-11-18 03:16:13.906832] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:10.367 [2024-11-18 03:16:13.906858] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:10.367 [2024-11-18 03:16:13.906885] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.367 [2024-11-18 03:16:13.911046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:16:10.367 spare 00:16:10.367 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.367 03:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:10.367 [2024-11-18 03:16:13.913130] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:11.748 03:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.748 03:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.748 03:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.748 03:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.748 03:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.748 03:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.748 03:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.748 03:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.748 03:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.748 03:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.748 03:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.748 "name": "raid_bdev1", 00:16:11.748 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:11.748 "strip_size_kb": 0, 00:16:11.748 
"state": "online", 00:16:11.748 "raid_level": "raid1", 00:16:11.748 "superblock": true, 00:16:11.748 "num_base_bdevs": 2, 00:16:11.748 "num_base_bdevs_discovered": 2, 00:16:11.748 "num_base_bdevs_operational": 2, 00:16:11.748 "process": { 00:16:11.748 "type": "rebuild", 00:16:11.748 "target": "spare", 00:16:11.748 "progress": { 00:16:11.748 "blocks": 2560, 00:16:11.748 "percent": 32 00:16:11.748 } 00:16:11.748 }, 00:16:11.748 "base_bdevs_list": [ 00:16:11.748 { 00:16:11.748 "name": "spare", 00:16:11.748 "uuid": "90aa8996-2ed7-5420-87b2-9b727e0ee9dc", 00:16:11.748 "is_configured": true, 00:16:11.748 "data_offset": 256, 00:16:11.748 "data_size": 7936 00:16:11.748 }, 00:16:11.748 { 00:16:11.748 "name": "BaseBdev2", 00:16:11.748 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:11.748 "is_configured": true, 00:16:11.748 "data_offset": 256, 00:16:11.748 "data_size": 7936 00:16:11.748 } 00:16:11.748 ] 00:16:11.748 }' 00:16:11.748 03:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.748 [2024-11-18 03:16:15.072788] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.748 [2024-11-18 03:16:15.117655] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:16:11.748 [2024-11-18 03:16:15.117727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.748 [2024-11-18 03:16:15.117740] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.748 [2024-11-18 03:16:15.117749] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.748 03:16:15 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.748 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.748 "name": "raid_bdev1", 00:16:11.748 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:11.748 "strip_size_kb": 0, 00:16:11.748 "state": "online", 00:16:11.748 "raid_level": "raid1", 00:16:11.748 "superblock": true, 00:16:11.748 "num_base_bdevs": 2, 00:16:11.748 "num_base_bdevs_discovered": 1, 00:16:11.748 "num_base_bdevs_operational": 1, 00:16:11.748 "base_bdevs_list": [ 00:16:11.748 { 00:16:11.748 "name": null, 00:16:11.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.748 "is_configured": false, 00:16:11.748 "data_offset": 0, 00:16:11.748 "data_size": 7936 00:16:11.748 }, 00:16:11.748 { 00:16:11.748 "name": "BaseBdev2", 00:16:11.748 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:11.748 "is_configured": true, 00:16:11.749 "data_offset": 256, 00:16:11.749 "data_size": 7936 00:16:11.749 } 00:16:11.749 ] 00:16:11.749 }' 00:16:11.749 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.749 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.008 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.008 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.008 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.008 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.008 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.008 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.008 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.008 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.008 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.009 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.268 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.268 "name": "raid_bdev1", 00:16:12.268 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:12.268 "strip_size_kb": 0, 00:16:12.268 "state": "online", 00:16:12.268 "raid_level": "raid1", 00:16:12.268 "superblock": true, 00:16:12.268 "num_base_bdevs": 2, 00:16:12.268 "num_base_bdevs_discovered": 1, 00:16:12.268 "num_base_bdevs_operational": 1, 00:16:12.268 "base_bdevs_list": [ 00:16:12.268 { 00:16:12.269 "name": null, 00:16:12.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.269 "is_configured": false, 00:16:12.269 "data_offset": 0, 00:16:12.269 "data_size": 7936 00:16:12.269 }, 00:16:12.269 { 00:16:12.269 "name": "BaseBdev2", 00:16:12.269 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:12.269 "is_configured": true, 00:16:12.269 "data_offset": 256, 00:16:12.269 "data_size": 7936 00:16:12.269 } 00:16:12.269 ] 00:16:12.269 }' 00:16:12.269 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.269 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.269 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.269 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.269 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:12.269 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.269 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.269 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.269 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:12.269 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.269 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.269 [2024-11-18 03:16:15.721239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:12.269 [2024-11-18 03:16:15.721306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.269 [2024-11-18 03:16:15.721330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:12.269 [2024-11-18 03:16:15.721340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.269 [2024-11-18 03:16:15.721729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.269 [2024-11-18 03:16:15.721748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:12.269 [2024-11-18 03:16:15.721820] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:12.269 [2024-11-18 03:16:15.721837] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:12.269 [2024-11-18 03:16:15.721845] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:12.269 [2024-11-18 03:16:15.721856] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:12.269 BaseBdev1 00:16:12.269 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.269 03:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:13.209 03:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:13.209 03:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.209 03:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.209 03:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.209 03:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.209 03:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:13.209 03:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.209 03:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.209 03:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.209 03:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.209 03:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.209 03:16:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.209 03:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.209 03:16:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.209 03:16:16 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.469 03:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.469 "name": "raid_bdev1", 00:16:13.469 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:13.469 "strip_size_kb": 0, 00:16:13.469 "state": "online", 00:16:13.469 "raid_level": "raid1", 00:16:13.469 "superblock": true, 00:16:13.469 "num_base_bdevs": 2, 00:16:13.469 "num_base_bdevs_discovered": 1, 00:16:13.469 "num_base_bdevs_operational": 1, 00:16:13.469 "base_bdevs_list": [ 00:16:13.469 { 00:16:13.469 "name": null, 00:16:13.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.469 "is_configured": false, 00:16:13.469 "data_offset": 0, 00:16:13.469 "data_size": 7936 00:16:13.469 }, 00:16:13.469 { 00:16:13.469 "name": "BaseBdev2", 00:16:13.469 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:13.469 "is_configured": true, 00:16:13.469 "data_offset": 256, 00:16:13.469 "data_size": 7936 00:16:13.469 } 00:16:13.469 ] 00:16:13.469 }' 00:16:13.469 03:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.469 03:16:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.729 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:13.729 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.729 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:13.729 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:13.729 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.729 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.729 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:16:13.729 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.729 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.729 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.729 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.729 "name": "raid_bdev1", 00:16:13.729 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:13.729 "strip_size_kb": 0, 00:16:13.729 "state": "online", 00:16:13.729 "raid_level": "raid1", 00:16:13.729 "superblock": true, 00:16:13.729 "num_base_bdevs": 2, 00:16:13.729 "num_base_bdevs_discovered": 1, 00:16:13.729 "num_base_bdevs_operational": 1, 00:16:13.729 "base_bdevs_list": [ 00:16:13.729 { 00:16:13.729 "name": null, 00:16:13.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.729 "is_configured": false, 00:16:13.729 "data_offset": 0, 00:16:13.729 "data_size": 7936 00:16:13.729 }, 00:16:13.729 { 00:16:13.729 "name": "BaseBdev2", 00:16:13.729 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:13.729 "is_configured": true, 00:16:13.729 "data_offset": 256, 00:16:13.729 "data_size": 7936 00:16:13.729 } 00:16:13.729 ] 00:16:13.729 }' 00:16:13.729 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.729 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.729 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.987 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.987 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.987 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:16:13.987 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.987 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:13.987 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:13.987 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:13.987 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:13.987 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.987 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.987 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.987 [2024-11-18 03:16:17.326509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:13.987 [2024-11-18 03:16:17.326673] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:13.987 [2024-11-18 03:16:17.326685] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:13.987 request: 00:16:13.987 { 00:16:13.987 "base_bdev": "BaseBdev1", 00:16:13.987 "raid_bdev": "raid_bdev1", 00:16:13.987 "method": "bdev_raid_add_base_bdev", 00:16:13.987 "req_id": 1 00:16:13.987 } 00:16:13.987 Got JSON-RPC error response 00:16:13.987 response: 00:16:13.987 { 00:16:13.987 "code": -22, 00:16:13.987 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:13.987 } 00:16:13.987 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:16:13.987 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:16:13.987 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:13.987 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:13.987 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:13.987 03:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:14.923 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:14.924 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.924 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.924 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.924 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.924 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:14.924 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.924 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.924 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.924 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.924 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.924 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.924 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:14.924 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.924 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.924 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.924 "name": "raid_bdev1", 00:16:14.924 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:14.924 "strip_size_kb": 0, 00:16:14.924 "state": "online", 00:16:14.924 "raid_level": "raid1", 00:16:14.924 "superblock": true, 00:16:14.924 "num_base_bdevs": 2, 00:16:14.924 "num_base_bdevs_discovered": 1, 00:16:14.924 "num_base_bdevs_operational": 1, 00:16:14.924 "base_bdevs_list": [ 00:16:14.924 { 00:16:14.924 "name": null, 00:16:14.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.924 "is_configured": false, 00:16:14.924 "data_offset": 0, 00:16:14.924 "data_size": 7936 00:16:14.924 }, 00:16:14.924 { 00:16:14.924 "name": "BaseBdev2", 00:16:14.924 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:14.924 "is_configured": true, 00:16:14.924 "data_offset": 256, 00:16:14.924 "data_size": 7936 00:16:14.924 } 00:16:14.924 ] 00:16:14.924 }' 00:16:14.924 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.924 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.494 03:16:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.494 "name": "raid_bdev1", 00:16:15.494 "uuid": "6a9d6866-2d7f-405e-a30c-cca26c58b950", 00:16:15.494 "strip_size_kb": 0, 00:16:15.494 "state": "online", 00:16:15.494 "raid_level": "raid1", 00:16:15.494 "superblock": true, 00:16:15.494 "num_base_bdevs": 2, 00:16:15.494 "num_base_bdevs_discovered": 1, 00:16:15.494 "num_base_bdevs_operational": 1, 00:16:15.494 "base_bdevs_list": [ 00:16:15.494 { 00:16:15.494 "name": null, 00:16:15.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.494 "is_configured": false, 00:16:15.494 "data_offset": 0, 00:16:15.494 "data_size": 7936 00:16:15.494 }, 00:16:15.494 { 00:16:15.494 "name": "BaseBdev2", 00:16:15.494 "uuid": "55b64e78-dfa7-54c0-9cdb-8cbe6403dd3a", 00:16:15.494 "is_configured": true, 00:16:15.494 "data_offset": 256, 00:16:15.494 "data_size": 7936 00:16:15.494 } 00:16:15.494 ] 00:16:15.494 }' 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.494 03:16:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 96974 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96974 ']' 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96974 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96974 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:15.494 killing process with pid 96974 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96974' 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96974 00:16:15.494 Received shutdown signal, test time was about 60.000000 seconds 00:16:15.494 00:16:15.494 Latency(us) 00:16:15.494 [2024-11-18T03:16:19.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.494 [2024-11-18T03:16:19.071Z] =================================================================================================================== 00:16:15.494 [2024-11-18T03:16:19.071Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:15.494 [2024-11-18 03:16:19.000052] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:15.494 [2024-11-18 03:16:19.000186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.494 [2024-11-18 03:16:19.000254] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:16:15.494 [2024-11-18 03:16:19.000265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:15.494 03:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96974 00:16:15.494 [2024-11-18 03:16:19.032195] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.754 03:16:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:16:15.754 00:16:15.754 real 0m18.024s 00:16:15.754 user 0m23.951s 00:16:15.754 sys 0m2.419s 00:16:15.754 03:16:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:15.754 03:16:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.754 ************************************ 00:16:15.754 END TEST raid_rebuild_test_sb_4k 00:16:15.754 ************************************ 00:16:15.754 03:16:19 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:16:15.754 03:16:19 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:16:15.754 03:16:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:15.754 03:16:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:15.754 03:16:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.754 ************************************ 00:16:15.754 START TEST raid_state_function_test_sb_md_separate 00:16:15.754 ************************************ 00:16:15.754 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:15.754 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:15.754 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:15.754 03:16:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:16.013 03:16:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:16.013 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97653 00:16:16.014 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:16.014 Process raid pid: 97653 00:16:16.014 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97653' 00:16:16.014 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97653 00:16:16.014 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97653 ']' 00:16:16.014 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.014 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:16.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.014 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:16.014 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:16.014 03:16:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.014 [2024-11-18 03:16:19.421112] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:16.014 [2024-11-18 03:16:19.421237] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.014 [2024-11-18 03:16:19.585040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.273 [2024-11-18 03:16:19.635105] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.273 [2024-11-18 03:16:19.677292] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.273 [2024-11-18 03:16:19.677344] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.842 [2024-11-18 03:16:20.266771] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:16.842 [2024-11-18 03:16:20.266820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:16:16.842 [2024-11-18 03:16:20.266833] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:16.842 [2024-11-18 03:16:20.266843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.842 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.843 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.843 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.843 "name": "Existed_Raid", 00:16:16.843 "uuid": "2c2037ce-d664-4fe0-8de1-f25fc7dbbce3", 00:16:16.843 "strip_size_kb": 0, 00:16:16.843 "state": "configuring", 00:16:16.843 "raid_level": "raid1", 00:16:16.843 "superblock": true, 00:16:16.843 "num_base_bdevs": 2, 00:16:16.843 "num_base_bdevs_discovered": 0, 00:16:16.843 "num_base_bdevs_operational": 2, 00:16:16.843 "base_bdevs_list": [ 00:16:16.843 { 00:16:16.843 "name": "BaseBdev1", 00:16:16.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.843 "is_configured": false, 00:16:16.843 "data_offset": 0, 00:16:16.843 "data_size": 0 00:16:16.843 }, 00:16:16.843 { 00:16:16.843 "name": "BaseBdev2", 00:16:16.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.843 "is_configured": false, 00:16:16.843 "data_offset": 0, 00:16:16.843 "data_size": 0 00:16:16.843 } 00:16:16.843 ] 00:16:16.843 }' 00:16:16.843 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.843 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.413 
[2024-11-18 03:16:20.725902] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:17.413 [2024-11-18 03:16:20.725973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.413 [2024-11-18 03:16:20.737917] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:17.413 [2024-11-18 03:16:20.737970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:17.413 [2024-11-18 03:16:20.737979] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:17.413 [2024-11-18 03:16:20.737989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.413 [2024-11-18 03:16:20.759475] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:17.413 
BaseBdev1 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.413 [ 00:16:17.413 { 00:16:17.413 "name": "BaseBdev1", 00:16:17.413 "aliases": [ 00:16:17.413 "7a3dbfce-db46-4b6a-bcad-d3c2aefacd88" 00:16:17.413 ], 00:16:17.413 "product_name": "Malloc disk", 
00:16:17.413 "block_size": 4096, 00:16:17.413 "num_blocks": 8192, 00:16:17.413 "uuid": "7a3dbfce-db46-4b6a-bcad-d3c2aefacd88", 00:16:17.413 "md_size": 32, 00:16:17.413 "md_interleave": false, 00:16:17.413 "dif_type": 0, 00:16:17.413 "assigned_rate_limits": { 00:16:17.413 "rw_ios_per_sec": 0, 00:16:17.413 "rw_mbytes_per_sec": 0, 00:16:17.413 "r_mbytes_per_sec": 0, 00:16:17.413 "w_mbytes_per_sec": 0 00:16:17.413 }, 00:16:17.413 "claimed": true, 00:16:17.413 "claim_type": "exclusive_write", 00:16:17.413 "zoned": false, 00:16:17.413 "supported_io_types": { 00:16:17.413 "read": true, 00:16:17.413 "write": true, 00:16:17.413 "unmap": true, 00:16:17.413 "flush": true, 00:16:17.413 "reset": true, 00:16:17.413 "nvme_admin": false, 00:16:17.413 "nvme_io": false, 00:16:17.413 "nvme_io_md": false, 00:16:17.413 "write_zeroes": true, 00:16:17.413 "zcopy": true, 00:16:17.413 "get_zone_info": false, 00:16:17.413 "zone_management": false, 00:16:17.413 "zone_append": false, 00:16:17.413 "compare": false, 00:16:17.413 "compare_and_write": false, 00:16:17.413 "abort": true, 00:16:17.413 "seek_hole": false, 00:16:17.413 "seek_data": false, 00:16:17.413 "copy": true, 00:16:17.413 "nvme_iov_md": false 00:16:17.413 }, 00:16:17.413 "memory_domains": [ 00:16:17.413 { 00:16:17.413 "dma_device_id": "system", 00:16:17.413 "dma_device_type": 1 00:16:17.413 }, 00:16:17.413 { 00:16:17.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.413 "dma_device_type": 2 00:16:17.413 } 00:16:17.413 ], 00:16:17.413 "driver_specific": {} 00:16:17.413 } 00:16:17.413 ] 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:17.413 03:16:20 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.413 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.414 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.414 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.414 "name": "Existed_Raid", 00:16:17.414 "uuid": "8267955e-b2d7-462a-a7cd-d66dd1a01f0c", 
00:16:17.414 "strip_size_kb": 0, 00:16:17.414 "state": "configuring", 00:16:17.414 "raid_level": "raid1", 00:16:17.414 "superblock": true, 00:16:17.414 "num_base_bdevs": 2, 00:16:17.414 "num_base_bdevs_discovered": 1, 00:16:17.414 "num_base_bdevs_operational": 2, 00:16:17.414 "base_bdevs_list": [ 00:16:17.414 { 00:16:17.414 "name": "BaseBdev1", 00:16:17.414 "uuid": "7a3dbfce-db46-4b6a-bcad-d3c2aefacd88", 00:16:17.414 "is_configured": true, 00:16:17.414 "data_offset": 256, 00:16:17.414 "data_size": 7936 00:16:17.414 }, 00:16:17.414 { 00:16:17.414 "name": "BaseBdev2", 00:16:17.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.414 "is_configured": false, 00:16:17.414 "data_offset": 0, 00:16:17.414 "data_size": 0 00:16:17.414 } 00:16:17.414 ] 00:16:17.414 }' 00:16:17.414 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.414 03:16:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.673 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:17.673 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.673 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.673 [2024-11-18 03:16:21.242758] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:17.673 [2024-11-18 03:16:21.242831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:16:17.673 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.673 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:17.673 03:16:21 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.933 [2024-11-18 03:16:21.254805] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:17.933 [2024-11-18 03:16:21.256831] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:17.933 [2024-11-18 03:16:21.256873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.933 "name": "Existed_Raid", 00:16:17.933 "uuid": "4b8823fd-b92d-4fcc-9de8-785a1b099ab0", 00:16:17.933 "strip_size_kb": 0, 00:16:17.933 "state": "configuring", 00:16:17.933 "raid_level": "raid1", 00:16:17.933 "superblock": true, 00:16:17.933 "num_base_bdevs": 2, 00:16:17.933 "num_base_bdevs_discovered": 1, 00:16:17.933 "num_base_bdevs_operational": 2, 00:16:17.933 "base_bdevs_list": [ 00:16:17.933 { 00:16:17.933 "name": "BaseBdev1", 00:16:17.933 "uuid": "7a3dbfce-db46-4b6a-bcad-d3c2aefacd88", 00:16:17.933 "is_configured": true, 00:16:17.933 "data_offset": 256, 00:16:17.933 "data_size": 7936 00:16:17.933 }, 00:16:17.933 { 00:16:17.933 "name": "BaseBdev2", 00:16:17.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.933 "is_configured": false, 00:16:17.933 "data_offset": 0, 00:16:17.933 "data_size": 0 00:16:17.933 } 00:16:17.933 ] 00:16:17.933 }' 00:16:17.933 03:16:21 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.933 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.193 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:16:18.193 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.193 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.193 [2024-11-18 03:16:21.707024] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:18.193 [2024-11-18 03:16:21.707262] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:18.193 [2024-11-18 03:16:21.707283] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:18.193 [2024-11-18 03:16:21.707392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:18.193 [2024-11-18 03:16:21.707523] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:18.193 [2024-11-18 03:16:21.707550] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:16:18.193 [2024-11-18 03:16:21.707645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.193 BaseBdev2 00:16:18.193 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.193 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:18.193 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:18.193 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:18.193 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:16:18.193 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.194 [ 00:16:18.194 { 00:16:18.194 "name": "BaseBdev2", 00:16:18.194 "aliases": [ 00:16:18.194 "8cdcd237-98ef-4fba-a985-c7677260543d" 00:16:18.194 ], 00:16:18.194 "product_name": "Malloc disk", 00:16:18.194 "block_size": 4096, 00:16:18.194 "num_blocks": 8192, 00:16:18.194 "uuid": "8cdcd237-98ef-4fba-a985-c7677260543d", 00:16:18.194 "md_size": 32, 00:16:18.194 "md_interleave": false, 00:16:18.194 "dif_type": 0, 00:16:18.194 "assigned_rate_limits": { 00:16:18.194 "rw_ios_per_sec": 0, 00:16:18.194 "rw_mbytes_per_sec": 0, 00:16:18.194 "r_mbytes_per_sec": 0, 00:16:18.194 "w_mbytes_per_sec": 0 00:16:18.194 }, 00:16:18.194 "claimed": true, 00:16:18.194 "claim_type": 
"exclusive_write", 00:16:18.194 "zoned": false, 00:16:18.194 "supported_io_types": { 00:16:18.194 "read": true, 00:16:18.194 "write": true, 00:16:18.194 "unmap": true, 00:16:18.194 "flush": true, 00:16:18.194 "reset": true, 00:16:18.194 "nvme_admin": false, 00:16:18.194 "nvme_io": false, 00:16:18.194 "nvme_io_md": false, 00:16:18.194 "write_zeroes": true, 00:16:18.194 "zcopy": true, 00:16:18.194 "get_zone_info": false, 00:16:18.194 "zone_management": false, 00:16:18.194 "zone_append": false, 00:16:18.194 "compare": false, 00:16:18.194 "compare_and_write": false, 00:16:18.194 "abort": true, 00:16:18.194 "seek_hole": false, 00:16:18.194 "seek_data": false, 00:16:18.194 "copy": true, 00:16:18.194 "nvme_iov_md": false 00:16:18.194 }, 00:16:18.194 "memory_domains": [ 00:16:18.194 { 00:16:18.194 "dma_device_id": "system", 00:16:18.194 "dma_device_type": 1 00:16:18.194 }, 00:16:18.194 { 00:16:18.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.194 "dma_device_type": 2 00:16:18.194 } 00:16:18.194 ], 00:16:18.194 "driver_specific": {} 00:16:18.194 } 00:16:18.194 ] 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.194 
03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.194 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.453 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.453 "name": "Existed_Raid", 00:16:18.453 "uuid": "4b8823fd-b92d-4fcc-9de8-785a1b099ab0", 00:16:18.453 "strip_size_kb": 0, 00:16:18.453 "state": "online", 00:16:18.453 "raid_level": "raid1", 00:16:18.453 "superblock": true, 00:16:18.453 "num_base_bdevs": 2, 00:16:18.453 "num_base_bdevs_discovered": 2, 00:16:18.453 "num_base_bdevs_operational": 2, 00:16:18.453 
"base_bdevs_list": [ 00:16:18.453 { 00:16:18.453 "name": "BaseBdev1", 00:16:18.453 "uuid": "7a3dbfce-db46-4b6a-bcad-d3c2aefacd88", 00:16:18.453 "is_configured": true, 00:16:18.453 "data_offset": 256, 00:16:18.453 "data_size": 7936 00:16:18.453 }, 00:16:18.453 { 00:16:18.453 "name": "BaseBdev2", 00:16:18.453 "uuid": "8cdcd237-98ef-4fba-a985-c7677260543d", 00:16:18.453 "is_configured": true, 00:16:18.453 "data_offset": 256, 00:16:18.453 "data_size": 7936 00:16:18.453 } 00:16:18.453 ] 00:16:18.453 }' 00:16:18.453 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.453 03:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.713 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:18.713 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:18.713 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:18.713 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:18.713 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:18.713 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:18.713 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:18.713 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:18.713 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.713 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:16:18.713 [2024-11-18 03:16:22.206594] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:18.713 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.713 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:18.713 "name": "Existed_Raid", 00:16:18.713 "aliases": [ 00:16:18.713 "4b8823fd-b92d-4fcc-9de8-785a1b099ab0" 00:16:18.713 ], 00:16:18.713 "product_name": "Raid Volume", 00:16:18.713 "block_size": 4096, 00:16:18.713 "num_blocks": 7936, 00:16:18.713 "uuid": "4b8823fd-b92d-4fcc-9de8-785a1b099ab0", 00:16:18.713 "md_size": 32, 00:16:18.713 "md_interleave": false, 00:16:18.713 "dif_type": 0, 00:16:18.713 "assigned_rate_limits": { 00:16:18.713 "rw_ios_per_sec": 0, 00:16:18.713 "rw_mbytes_per_sec": 0, 00:16:18.713 "r_mbytes_per_sec": 0, 00:16:18.713 "w_mbytes_per_sec": 0 00:16:18.713 }, 00:16:18.713 "claimed": false, 00:16:18.713 "zoned": false, 00:16:18.713 "supported_io_types": { 00:16:18.713 "read": true, 00:16:18.713 "write": true, 00:16:18.713 "unmap": false, 00:16:18.713 "flush": false, 00:16:18.713 "reset": true, 00:16:18.713 "nvme_admin": false, 00:16:18.713 "nvme_io": false, 00:16:18.713 "nvme_io_md": false, 00:16:18.713 "write_zeroes": true, 00:16:18.713 "zcopy": false, 00:16:18.713 "get_zone_info": false, 00:16:18.713 "zone_management": false, 00:16:18.713 "zone_append": false, 00:16:18.713 "compare": false, 00:16:18.713 "compare_and_write": false, 00:16:18.713 "abort": false, 00:16:18.713 "seek_hole": false, 00:16:18.713 "seek_data": false, 00:16:18.713 "copy": false, 00:16:18.713 "nvme_iov_md": false 00:16:18.713 }, 00:16:18.713 "memory_domains": [ 00:16:18.713 { 00:16:18.713 "dma_device_id": "system", 00:16:18.713 "dma_device_type": 1 00:16:18.713 }, 00:16:18.713 { 00:16:18.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.713 "dma_device_type": 2 00:16:18.713 }, 00:16:18.713 { 
00:16:18.713 "dma_device_id": "system", 00:16:18.713 "dma_device_type": 1 00:16:18.713 }, 00:16:18.713 { 00:16:18.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.713 "dma_device_type": 2 00:16:18.713 } 00:16:18.713 ], 00:16:18.713 "driver_specific": { 00:16:18.713 "raid": { 00:16:18.713 "uuid": "4b8823fd-b92d-4fcc-9de8-785a1b099ab0", 00:16:18.713 "strip_size_kb": 0, 00:16:18.713 "state": "online", 00:16:18.713 "raid_level": "raid1", 00:16:18.713 "superblock": true, 00:16:18.713 "num_base_bdevs": 2, 00:16:18.713 "num_base_bdevs_discovered": 2, 00:16:18.713 "num_base_bdevs_operational": 2, 00:16:18.713 "base_bdevs_list": [ 00:16:18.713 { 00:16:18.713 "name": "BaseBdev1", 00:16:18.713 "uuid": "7a3dbfce-db46-4b6a-bcad-d3c2aefacd88", 00:16:18.713 "is_configured": true, 00:16:18.713 "data_offset": 256, 00:16:18.713 "data_size": 7936 00:16:18.713 }, 00:16:18.713 { 00:16:18.713 "name": "BaseBdev2", 00:16:18.713 "uuid": "8cdcd237-98ef-4fba-a985-c7677260543d", 00:16:18.713 "is_configured": true, 00:16:18.713 "data_offset": 256, 00:16:18.713 "data_size": 7936 00:16:18.713 } 00:16:18.713 ] 00:16:18.713 } 00:16:18.713 } 00:16:18.713 }' 00:16:18.713 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:18.713 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:18.713 BaseBdev2' 00:16:18.713 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.973 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:18.973 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.974 [2024-11-18 03:16:22.410006] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.974 "name": "Existed_Raid", 00:16:18.974 "uuid": "4b8823fd-b92d-4fcc-9de8-785a1b099ab0", 00:16:18.974 "strip_size_kb": 0, 00:16:18.974 "state": "online", 00:16:18.974 "raid_level": "raid1", 00:16:18.974 "superblock": true, 00:16:18.974 "num_base_bdevs": 2, 00:16:18.974 "num_base_bdevs_discovered": 1, 00:16:18.974 "num_base_bdevs_operational": 1, 00:16:18.974 "base_bdevs_list": [ 00:16:18.974 { 00:16:18.974 "name": null, 00:16:18.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.974 "is_configured": false, 00:16:18.974 "data_offset": 0, 00:16:18.974 "data_size": 7936 00:16:18.974 }, 00:16:18.974 { 00:16:18.974 "name": "BaseBdev2", 00:16:18.974 "uuid": 
"8cdcd237-98ef-4fba-a985-c7677260543d", 00:16:18.974 "is_configured": true, 00:16:18.974 "data_offset": 256, 00:16:18.974 "data_size": 7936 00:16:18.974 } 00:16:18.974 ] 00:16:18.974 }' 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.974 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.543 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:19.543 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:19.543 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.543 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:19.543 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.543 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.543 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.543 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:19.544 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:19.544 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:19.544 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.544 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.544 [2024-11-18 03:16:22.953421] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:19.544 [2024-11-18 03:16:22.953534] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.544 [2024-11-18 03:16:22.966070] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.544 [2024-11-18 03:16:22.966122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.544 [2024-11-18 03:16:22.966134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:19.544 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.544 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:19.544 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:19.544 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.544 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.544 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.544 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:19.544 03:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.544 03:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:19.544 03:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:19.544 03:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:19.544 03:16:23 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97653 00:16:19.544 03:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97653 ']' 00:16:19.544 03:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97653 00:16:19.544 03:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:19.544 03:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:19.544 03:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97653 00:16:19.544 03:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:19.544 03:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:19.544 killing process with pid 97653 00:16:19.544 03:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97653' 00:16:19.544 03:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97653 00:16:19.544 [2024-11-18 03:16:23.063854] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:19.544 03:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97653 00:16:19.544 [2024-11-18 03:16:23.064926] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:19.803 03:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:16:19.803 00:16:19.803 real 0m3.983s 00:16:19.803 user 0m6.283s 00:16:19.803 sys 0m0.813s 00:16:19.803 03:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:19.803 
03:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.803 ************************************ 00:16:19.803 END TEST raid_state_function_test_sb_md_separate 00:16:19.803 ************************************ 00:16:19.803 03:16:23 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:16:19.803 03:16:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:19.803 03:16:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.803 03:16:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.062 ************************************ 00:16:20.062 START TEST raid_superblock_test_md_separate 00:16:20.062 ************************************ 00:16:20.062 03:16:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:20.062 03:16:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:20.062 03:16:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:20.062 03:16:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:20.062 03:16:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:20.062 03:16:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:20.062 03:16:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:20.062 03:16:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:20.062 03:16:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:20.062 03:16:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:16:20.062 03:16:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:20.062 03:16:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:20.062 03:16:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:20.062 03:16:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:20.062 03:16:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:20.062 03:16:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:20.062 03:16:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=97894 00:16:20.063 03:16:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:20.063 03:16:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 97894 00:16:20.063 03:16:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97894 ']' 00:16:20.063 03:16:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.063 03:16:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:20.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.063 03:16:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:20.063 03:16:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:20.063 03:16:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.063 [2024-11-18 03:16:23.464165] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:20.063 [2024-11-18 03:16:23.464298] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97894 ] 00:16:20.063 [2024-11-18 03:16:23.620991] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.321 [2024-11-18 03:16:23.670947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.321 [2024-11-18 03:16:23.713346] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.321 [2024-11-18 03:16:23.713388] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:20.890 03:16:24 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.890 malloc1 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.890 [2024-11-18 03:16:24.328462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:20.890 [2024-11-18 03:16:24.328536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.890 [2024-11-18 03:16:24.328563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:20.890 [2024-11-18 03:16:24.328575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.890 [2024-11-18 03:16:24.330659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.890 [2024-11-18 03:16:24.330701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:16:20.890 pt1 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.890 malloc2 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.890 03:16:24 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.890 [2024-11-18 03:16:24.368317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:20.890 [2024-11-18 03:16:24.368383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.890 [2024-11-18 03:16:24.368401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:20.890 [2024-11-18 03:16:24.368412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.890 [2024-11-18 03:16:24.370484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.890 [2024-11-18 03:16:24.370531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:20.890 pt2 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.890 [2024-11-18 03:16:24.380328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:20.890 [2024-11-18 03:16:24.382324] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:20.890 [2024-11-18 03:16:24.382487] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:20.890 [2024-11-18 03:16:24.382504] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:20.890 [2024-11-18 03:16:24.382600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:20.890 [2024-11-18 03:16:24.382704] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:20.890 [2024-11-18 03:16:24.382731] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:20.890 [2024-11-18 03:16:24.382869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.890 03:16:24 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.890 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.890 "name": "raid_bdev1", 00:16:20.890 "uuid": "dc7ef564-8c3e-4992-b291-8f8a0e1232cd", 00:16:20.890 "strip_size_kb": 0, 00:16:20.890 "state": "online", 00:16:20.890 "raid_level": "raid1", 00:16:20.890 "superblock": true, 00:16:20.890 "num_base_bdevs": 2, 00:16:20.890 "num_base_bdevs_discovered": 2, 00:16:20.890 "num_base_bdevs_operational": 2, 00:16:20.890 "base_bdevs_list": [ 00:16:20.890 { 00:16:20.891 "name": "pt1", 00:16:20.891 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:20.891 "is_configured": true, 00:16:20.891 "data_offset": 256, 00:16:20.891 "data_size": 7936 00:16:20.891 }, 00:16:20.891 { 00:16:20.891 "name": "pt2", 00:16:20.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.891 "is_configured": true, 00:16:20.891 "data_offset": 256, 00:16:20.891 "data_size": 7936 00:16:20.891 } 00:16:20.891 ] 00:16:20.891 }' 00:16:20.891 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.891 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.459 [2024-11-18 03:16:24.851895] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:21.459 "name": "raid_bdev1", 00:16:21.459 "aliases": [ 00:16:21.459 "dc7ef564-8c3e-4992-b291-8f8a0e1232cd" 00:16:21.459 ], 00:16:21.459 "product_name": "Raid Volume", 00:16:21.459 "block_size": 4096, 00:16:21.459 "num_blocks": 7936, 00:16:21.459 "uuid": "dc7ef564-8c3e-4992-b291-8f8a0e1232cd", 00:16:21.459 "md_size": 32, 00:16:21.459 "md_interleave": false, 00:16:21.459 "dif_type": 0, 00:16:21.459 "assigned_rate_limits": { 00:16:21.459 "rw_ios_per_sec": 0, 00:16:21.459 "rw_mbytes_per_sec": 0, 00:16:21.459 "r_mbytes_per_sec": 0, 00:16:21.459 "w_mbytes_per_sec": 0 00:16:21.459 }, 00:16:21.459 "claimed": false, 00:16:21.459 "zoned": false, 
00:16:21.459 "supported_io_types": { 00:16:21.459 "read": true, 00:16:21.459 "write": true, 00:16:21.459 "unmap": false, 00:16:21.459 "flush": false, 00:16:21.459 "reset": true, 00:16:21.459 "nvme_admin": false, 00:16:21.459 "nvme_io": false, 00:16:21.459 "nvme_io_md": false, 00:16:21.459 "write_zeroes": true, 00:16:21.459 "zcopy": false, 00:16:21.459 "get_zone_info": false, 00:16:21.459 "zone_management": false, 00:16:21.459 "zone_append": false, 00:16:21.459 "compare": false, 00:16:21.459 "compare_and_write": false, 00:16:21.459 "abort": false, 00:16:21.459 "seek_hole": false, 00:16:21.459 "seek_data": false, 00:16:21.459 "copy": false, 00:16:21.459 "nvme_iov_md": false 00:16:21.459 }, 00:16:21.459 "memory_domains": [ 00:16:21.459 { 00:16:21.459 "dma_device_id": "system", 00:16:21.459 "dma_device_type": 1 00:16:21.459 }, 00:16:21.459 { 00:16:21.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.459 "dma_device_type": 2 00:16:21.459 }, 00:16:21.459 { 00:16:21.459 "dma_device_id": "system", 00:16:21.459 "dma_device_type": 1 00:16:21.459 }, 00:16:21.459 { 00:16:21.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.459 "dma_device_type": 2 00:16:21.459 } 00:16:21.459 ], 00:16:21.459 "driver_specific": { 00:16:21.459 "raid": { 00:16:21.459 "uuid": "dc7ef564-8c3e-4992-b291-8f8a0e1232cd", 00:16:21.459 "strip_size_kb": 0, 00:16:21.459 "state": "online", 00:16:21.459 "raid_level": "raid1", 00:16:21.459 "superblock": true, 00:16:21.459 "num_base_bdevs": 2, 00:16:21.459 "num_base_bdevs_discovered": 2, 00:16:21.459 "num_base_bdevs_operational": 2, 00:16:21.459 "base_bdevs_list": [ 00:16:21.459 { 00:16:21.459 "name": "pt1", 00:16:21.459 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:21.459 "is_configured": true, 00:16:21.459 "data_offset": 256, 00:16:21.459 "data_size": 7936 00:16:21.459 }, 00:16:21.459 { 00:16:21.459 "name": "pt2", 00:16:21.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.459 "is_configured": true, 00:16:21.459 "data_offset": 256, 
00:16:21.459 "data_size": 7936 00:16:21.459 } 00:16:21.459 ] 00:16:21.459 } 00:16:21.459 } 00:16:21.459 }' 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:21.459 pt2' 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.459 03:16:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.459 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.718 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:21.718 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:21.718 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.718 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:16:21.718 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.718 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.718 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.718 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.718 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:21.718 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:21.718 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:21.718 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:21.718 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.718 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.718 [2024-11-18 03:16:25.095390] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dc7ef564-8c3e-4992-b291-8f8a0e1232cd 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z dc7ef564-8c3e-4992-b291-8f8a0e1232cd ']' 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.719 [2024-11-18 03:16:25.127054] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.719 [2024-11-18 03:16:25.127089] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.719 [2024-11-18 03:16:25.127203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.719 [2024-11-18 03:16:25.127287] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.719 [2024-11-18 03:16:25.127296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:16:21.719 03:16:25 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.719 [2024-11-18 03:16:25.250860] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:21.719 [2024-11-18 03:16:25.252871] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:21.719 [2024-11-18 03:16:25.252941] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:21.719 [2024-11-18 03:16:25.253005] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:21.719 [2024-11-18 03:16:25.253024] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.719 [2024-11-18 03:16:25.253033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:21.719 request: 00:16:21.719 { 00:16:21.719 "name": 
"raid_bdev1", 00:16:21.719 "raid_level": "raid1", 00:16:21.719 "base_bdevs": [ 00:16:21.719 "malloc1", 00:16:21.719 "malloc2" 00:16:21.719 ], 00:16:21.719 "superblock": false, 00:16:21.719 "method": "bdev_raid_create", 00:16:21.719 "req_id": 1 00:16:21.719 } 00:16:21.719 Got JSON-RPC error response 00:16:21.719 response: 00:16:21.719 { 00:16:21.719 "code": -17, 00:16:21.719 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:21.719 } 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.719 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.987 [2024-11-18 03:16:25.314680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:21.987 [2024-11-18 03:16:25.314742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.987 [2024-11-18 03:16:25.314762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:21.987 [2024-11-18 03:16:25.314771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.987 [2024-11-18 03:16:25.316826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.987 [2024-11-18 03:16:25.316858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:21.987 [2024-11-18 03:16:25.316913] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:21.987 [2024-11-18 03:16:25.316946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:21.987 pt1 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.987 "name": "raid_bdev1", 00:16:21.987 "uuid": "dc7ef564-8c3e-4992-b291-8f8a0e1232cd", 00:16:21.987 "strip_size_kb": 0, 00:16:21.987 "state": "configuring", 00:16:21.987 "raid_level": "raid1", 00:16:21.987 "superblock": true, 00:16:21.987 "num_base_bdevs": 2, 00:16:21.987 "num_base_bdevs_discovered": 1, 00:16:21.987 "num_base_bdevs_operational": 2, 00:16:21.987 "base_bdevs_list": [ 00:16:21.987 { 00:16:21.987 "name": "pt1", 00:16:21.987 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:21.987 "is_configured": true, 00:16:21.987 "data_offset": 256, 00:16:21.987 "data_size": 7936 00:16:21.987 }, 00:16:21.987 { 00:16:21.987 "name": null, 00:16:21.987 
"uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.987 "is_configured": false, 00:16:21.987 "data_offset": 256, 00:16:21.987 "data_size": 7936 00:16:21.987 } 00:16:21.987 ] 00:16:21.987 }' 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.987 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.260 [2024-11-18 03:16:25.781927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:22.260 [2024-11-18 03:16:25.782008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.260 [2024-11-18 03:16:25.782033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:22.260 [2024-11-18 03:16:25.782042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.260 [2024-11-18 03:16:25.782249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.260 [2024-11-18 03:16:25.782264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:22.260 [2024-11-18 03:16:25.782316] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:16:22.260 [2024-11-18 03:16:25.782334] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:22.260 [2024-11-18 03:16:25.782427] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:22.260 [2024-11-18 03:16:25.782436] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:22.260 [2024-11-18 03:16:25.782504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:22.260 [2024-11-18 03:16:25.782581] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:22.260 [2024-11-18 03:16:25.782594] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:22.260 [2024-11-18 03:16:25.782662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.260 pt2 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.260 "name": "raid_bdev1", 00:16:22.260 "uuid": "dc7ef564-8c3e-4992-b291-8f8a0e1232cd", 00:16:22.260 "strip_size_kb": 0, 00:16:22.260 "state": "online", 00:16:22.260 "raid_level": "raid1", 00:16:22.260 "superblock": true, 00:16:22.260 "num_base_bdevs": 2, 00:16:22.260 "num_base_bdevs_discovered": 2, 00:16:22.260 "num_base_bdevs_operational": 2, 00:16:22.260 "base_bdevs_list": [ 00:16:22.260 { 00:16:22.260 "name": "pt1", 00:16:22.260 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:22.260 "is_configured": true, 00:16:22.260 "data_offset": 256, 00:16:22.260 "data_size": 7936 00:16:22.260 }, 00:16:22.260 { 00:16:22.260 "name": "pt2", 00:16:22.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.260 "is_configured": true, 00:16:22.260 "data_offset": 256, 
00:16:22.260 "data_size": 7936 00:16:22.260 } 00:16:22.260 ] 00:16:22.260 }' 00:16:22.260 03:16:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.261 03:16:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.828 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:22.828 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:22.828 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:22.828 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:22.828 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:22.828 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:22.828 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:22.828 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:22.828 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.828 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.828 [2024-11-18 03:16:26.241410] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.828 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.828 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:22.828 "name": "raid_bdev1", 00:16:22.828 "aliases": [ 00:16:22.828 "dc7ef564-8c3e-4992-b291-8f8a0e1232cd" 00:16:22.828 ], 00:16:22.828 "product_name": 
"Raid Volume", 00:16:22.828 "block_size": 4096, 00:16:22.828 "num_blocks": 7936, 00:16:22.828 "uuid": "dc7ef564-8c3e-4992-b291-8f8a0e1232cd", 00:16:22.828 "md_size": 32, 00:16:22.828 "md_interleave": false, 00:16:22.828 "dif_type": 0, 00:16:22.828 "assigned_rate_limits": { 00:16:22.828 "rw_ios_per_sec": 0, 00:16:22.828 "rw_mbytes_per_sec": 0, 00:16:22.828 "r_mbytes_per_sec": 0, 00:16:22.828 "w_mbytes_per_sec": 0 00:16:22.828 }, 00:16:22.828 "claimed": false, 00:16:22.828 "zoned": false, 00:16:22.828 "supported_io_types": { 00:16:22.828 "read": true, 00:16:22.828 "write": true, 00:16:22.828 "unmap": false, 00:16:22.828 "flush": false, 00:16:22.828 "reset": true, 00:16:22.828 "nvme_admin": false, 00:16:22.828 "nvme_io": false, 00:16:22.828 "nvme_io_md": false, 00:16:22.828 "write_zeroes": true, 00:16:22.828 "zcopy": false, 00:16:22.828 "get_zone_info": false, 00:16:22.828 "zone_management": false, 00:16:22.828 "zone_append": false, 00:16:22.828 "compare": false, 00:16:22.828 "compare_and_write": false, 00:16:22.828 "abort": false, 00:16:22.828 "seek_hole": false, 00:16:22.828 "seek_data": false, 00:16:22.828 "copy": false, 00:16:22.829 "nvme_iov_md": false 00:16:22.829 }, 00:16:22.829 "memory_domains": [ 00:16:22.829 { 00:16:22.829 "dma_device_id": "system", 00:16:22.829 "dma_device_type": 1 00:16:22.829 }, 00:16:22.829 { 00:16:22.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.829 "dma_device_type": 2 00:16:22.829 }, 00:16:22.829 { 00:16:22.829 "dma_device_id": "system", 00:16:22.829 "dma_device_type": 1 00:16:22.829 }, 00:16:22.829 { 00:16:22.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.829 "dma_device_type": 2 00:16:22.829 } 00:16:22.829 ], 00:16:22.829 "driver_specific": { 00:16:22.829 "raid": { 00:16:22.829 "uuid": "dc7ef564-8c3e-4992-b291-8f8a0e1232cd", 00:16:22.829 "strip_size_kb": 0, 00:16:22.829 "state": "online", 00:16:22.829 "raid_level": "raid1", 00:16:22.829 "superblock": true, 00:16:22.829 "num_base_bdevs": 2, 00:16:22.829 
"num_base_bdevs_discovered": 2, 00:16:22.829 "num_base_bdevs_operational": 2, 00:16:22.829 "base_bdevs_list": [ 00:16:22.829 { 00:16:22.829 "name": "pt1", 00:16:22.829 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:22.829 "is_configured": true, 00:16:22.829 "data_offset": 256, 00:16:22.829 "data_size": 7936 00:16:22.829 }, 00:16:22.829 { 00:16:22.829 "name": "pt2", 00:16:22.829 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.829 "is_configured": true, 00:16:22.829 "data_offset": 256, 00:16:22.829 "data_size": 7936 00:16:22.829 } 00:16:22.829 ] 00:16:22.829 } 00:16:22.829 } 00:16:22.829 }' 00:16:22.829 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:22.829 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:22.829 pt2' 00:16:22.829 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.829 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:22.829 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:22.829 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.829 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:22.829 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.829 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.829 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.829 
03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:22.829 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:22.829 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:22.829 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.829 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:22.829 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.829 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.088 [2024-11-18 03:16:26.449058] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' dc7ef564-8c3e-4992-b291-8f8a0e1232cd '!=' dc7ef564-8c3e-4992-b291-8f8a0e1232cd ']' 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.088 [2024-11-18 03:16:26.496728] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.088 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:23.089 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.089 03:16:26 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.089 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.089 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.089 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.089 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.089 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.089 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.089 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.089 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.089 "name": "raid_bdev1", 00:16:23.089 "uuid": "dc7ef564-8c3e-4992-b291-8f8a0e1232cd", 00:16:23.089 "strip_size_kb": 0, 00:16:23.089 "state": "online", 00:16:23.089 "raid_level": "raid1", 00:16:23.089 "superblock": true, 00:16:23.089 "num_base_bdevs": 2, 00:16:23.089 "num_base_bdevs_discovered": 1, 00:16:23.089 "num_base_bdevs_operational": 1, 00:16:23.089 "base_bdevs_list": [ 00:16:23.089 { 00:16:23.089 "name": null, 00:16:23.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.089 "is_configured": false, 00:16:23.089 "data_offset": 0, 00:16:23.089 "data_size": 7936 00:16:23.089 }, 00:16:23.089 { 00:16:23.089 "name": "pt2", 00:16:23.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.089 "is_configured": true, 00:16:23.089 "data_offset": 256, 00:16:23.089 "data_size": 7936 00:16:23.089 } 00:16:23.089 ] 00:16:23.089 }' 00:16:23.089 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:23.089 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.657 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:23.657 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.657 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.657 [2024-11-18 03:16:26.951899] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:23.657 [2024-11-18 03:16:26.951936] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.657 [2024-11-18 03:16:26.952022] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.657 [2024-11-18 03:16:26.952075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.657 [2024-11-18 03:16:26.952084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:23.657 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.657 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.657 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.657 03:16:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:23.657 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.657 03:16:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.657 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:23.658 03:16:27 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.658 [2024-11-18 03:16:27.031778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:23.658 [2024-11-18 03:16:27.031844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.658 
[2024-11-18 03:16:27.031865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:23.658 [2024-11-18 03:16:27.031875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.658 [2024-11-18 03:16:27.034330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.658 [2024-11-18 03:16:27.034372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:23.658 [2024-11-18 03:16:27.034440] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:23.658 [2024-11-18 03:16:27.034475] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:23.658 [2024-11-18 03:16:27.034551] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:23.658 [2024-11-18 03:16:27.034561] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:23.658 [2024-11-18 03:16:27.034648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:23.658 [2024-11-18 03:16:27.034752] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:23.658 [2024-11-18 03:16:27.034765] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:23.658 [2024-11-18 03:16:27.034845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.658 pt2 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.658 "name": "raid_bdev1", 00:16:23.658 "uuid": "dc7ef564-8c3e-4992-b291-8f8a0e1232cd", 00:16:23.658 "strip_size_kb": 0, 00:16:23.658 "state": "online", 00:16:23.658 "raid_level": "raid1", 00:16:23.658 "superblock": true, 00:16:23.658 "num_base_bdevs": 2, 00:16:23.658 "num_base_bdevs_discovered": 1, 00:16:23.658 "num_base_bdevs_operational": 1, 00:16:23.658 "base_bdevs_list": [ 00:16:23.658 { 00:16:23.658 
"name": null, 00:16:23.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.658 "is_configured": false, 00:16:23.658 "data_offset": 256, 00:16:23.658 "data_size": 7936 00:16:23.658 }, 00:16:23.658 { 00:16:23.658 "name": "pt2", 00:16:23.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.658 "is_configured": true, 00:16:23.658 "data_offset": 256, 00:16:23.658 "data_size": 7936 00:16:23.658 } 00:16:23.658 ] 00:16:23.658 }' 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.658 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.227 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:24.227 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.227 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.227 [2024-11-18 03:16:27.550900] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:24.227 [2024-11-18 03:16:27.550934] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:24.227 [2024-11-18 03:16:27.551029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.227 [2024-11-18 03:16:27.551077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:24.227 [2024-11-18 03:16:27.551088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:24.227 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.227 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.227 03:16:27 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:24.227 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.227 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.227 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.227 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:24.227 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:24.227 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:24.227 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:24.227 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.227 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.227 [2024-11-18 03:16:27.610797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:24.227 [2024-11-18 03:16:27.610857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.228 [2024-11-18 03:16:27.610878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:24.228 [2024-11-18 03:16:27.610890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.228 [2024-11-18 03:16:27.613055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.228 [2024-11-18 03:16:27.613091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:24.228 [2024-11-18 03:16:27.613148] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:16:24.228 [2024-11-18 03:16:27.613200] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:24.228 [2024-11-18 03:16:27.613309] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:24.228 [2024-11-18 03:16:27.613322] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:24.228 [2024-11-18 03:16:27.613350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:24.228 [2024-11-18 03:16:27.613392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:24.228 [2024-11-18 03:16:27.613473] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:24.228 [2024-11-18 03:16:27.613492] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:24.228 [2024-11-18 03:16:27.613570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:24.228 [2024-11-18 03:16:27.613652] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:24.228 [2024-11-18 03:16:27.613661] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:24.228 [2024-11-18 03:16:27.613746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.228 pt1 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.228 "name": "raid_bdev1", 00:16:24.228 "uuid": "dc7ef564-8c3e-4992-b291-8f8a0e1232cd", 00:16:24.228 "strip_size_kb": 0, 00:16:24.228 "state": "online", 00:16:24.228 "raid_level": "raid1", 00:16:24.228 "superblock": true, 00:16:24.228 "num_base_bdevs": 2, 00:16:24.228 "num_base_bdevs_discovered": 1, 00:16:24.228 
"num_base_bdevs_operational": 1, 00:16:24.228 "base_bdevs_list": [ 00:16:24.228 { 00:16:24.228 "name": null, 00:16:24.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.228 "is_configured": false, 00:16:24.228 "data_offset": 256, 00:16:24.228 "data_size": 7936 00:16:24.228 }, 00:16:24.228 { 00:16:24.228 "name": "pt2", 00:16:24.228 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.228 "is_configured": true, 00:16:24.228 "data_offset": 256, 00:16:24.228 "data_size": 7936 00:16:24.228 } 00:16:24.228 ] 00:16:24.228 }' 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.228 03:16:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.487 03:16:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:24.487 03:16:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:24.487 03:16:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.487 03:16:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.747 [2024-11-18 
03:16:28.090248] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' dc7ef564-8c3e-4992-b291-8f8a0e1232cd '!=' dc7ef564-8c3e-4992-b291-8f8a0e1232cd ']' 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 97894 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97894 ']' 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 97894 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97894 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:24.747 killing process with pid 97894 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97894' 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 97894 00:16:24.747 [2024-11-18 03:16:28.155106] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:24.747 [2024-11-18 03:16:28.155216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.747 [2024-11-18 03:16:28.155274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:16:24.747 03:16:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 97894 00:16:24.747 [2024-11-18 03:16:28.155284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:24.747 [2024-11-18 03:16:28.179970] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:25.006 03:16:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:16:25.006 00:16:25.006 real 0m5.045s 00:16:25.006 user 0m8.276s 00:16:25.006 sys 0m1.099s 00:16:25.006 03:16:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.006 03:16:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.007 ************************************ 00:16:25.007 END TEST raid_superblock_test_md_separate 00:16:25.007 ************************************ 00:16:25.007 03:16:28 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:16:25.007 03:16:28 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:16:25.007 03:16:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:25.007 03:16:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:25.007 03:16:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:25.007 ************************************ 00:16:25.007 START TEST raid_rebuild_test_sb_md_separate 00:16:25.007 ************************************ 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:25.007 
03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=98205 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 98205 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98205 ']' 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.007 03:16:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.265 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:16:25.265 Zero copy mechanism will not be used. 00:16:25.265 [2024-11-18 03:16:28.589732] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:25.265 [2024-11-18 03:16:28.589869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98205 ] 00:16:25.265 [2024-11-18 03:16:28.752277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.265 [2024-11-18 03:16:28.802091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.524 [2024-11-18 03:16:28.844109] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.524 [2024-11-18 03:16:28.844167] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.099 BaseBdev1_malloc 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:26.099 03:16:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.099 [2024-11-18 03:16:29.446897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:26.099 [2024-11-18 03:16:29.446954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.099 [2024-11-18 03:16:29.446986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:26.099 [2024-11-18 03:16:29.447003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.099 [2024-11-18 03:16:29.448952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.099 [2024-11-18 03:16:29.449003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:26.099 BaseBdev1 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.099 BaseBdev2_malloc 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.099 [2024-11-18 03:16:29.486178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:26.099 [2024-11-18 03:16:29.486237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.099 [2024-11-18 03:16:29.486259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:26.099 [2024-11-18 03:16:29.486267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.099 [2024-11-18 03:16:29.488216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.099 [2024-11-18 03:16:29.488248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:26.099 BaseBdev2 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.099 spare_malloc 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.099 spare_delay 00:16:26.099 03:16:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.099 [2024-11-18 03:16:29.527437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:26.099 [2024-11-18 03:16:29.527499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.099 [2024-11-18 03:16:29.527525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:26.099 [2024-11-18 03:16:29.527536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.099 [2024-11-18 03:16:29.529579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.099 [2024-11-18 03:16:29.529615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:26.099 spare 00:16:26.099 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.100 [2024-11-18 03:16:29.539460] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.100 [2024-11-18 03:16:29.541589] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:16:26.100 [2024-11-18 03:16:29.541768] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:26.100 [2024-11-18 03:16:29.541787] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:26.100 [2024-11-18 03:16:29.541885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:26.100 [2024-11-18 03:16:29.542017] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:26.100 [2024-11-18 03:16:29.542047] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:26.100 [2024-11-18 03:16:29.542152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.100 "name": "raid_bdev1", 00:16:26.100 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:26.100 "strip_size_kb": 0, 00:16:26.100 "state": "online", 00:16:26.100 "raid_level": "raid1", 00:16:26.100 "superblock": true, 00:16:26.100 "num_base_bdevs": 2, 00:16:26.100 "num_base_bdevs_discovered": 2, 00:16:26.100 "num_base_bdevs_operational": 2, 00:16:26.100 "base_bdevs_list": [ 00:16:26.100 { 00:16:26.100 "name": "BaseBdev1", 00:16:26.100 "uuid": "9a22ef48-d824-5111-a4a2-baa9402ac12f", 00:16:26.100 "is_configured": true, 00:16:26.100 "data_offset": 256, 00:16:26.100 "data_size": 7936 00:16:26.100 }, 00:16:26.100 { 00:16:26.100 "name": "BaseBdev2", 00:16:26.100 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:26.100 "is_configured": true, 00:16:26.100 "data_offset": 256, 00:16:26.100 "data_size": 7936 00:16:26.100 } 00:16:26.100 ] 00:16:26.100 }' 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.100 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.668 03:16:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:26.668 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:26.668 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.668 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.668 [2024-11-18 03:16:29.971051] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.668 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.668 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:26.668 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.668 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.668 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.668 03:16:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:26.668 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.668 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:26.668 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:26.668 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:26.668 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:26.668 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:26.668 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.668 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:26.668 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:26.668 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:26.668 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:26.668 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:26.669 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:26.669 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:26.669 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:26.669 [2024-11-18 03:16:30.238342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:26.928 /dev/nbd0 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:26.928 
03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:26.928 1+0 records in 00:16:26.928 1+0 records out 00:16:26.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385347 s, 10.6 MB/s 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:26.928 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:27.496 7936+0 records in 00:16:27.496 7936+0 records out 00:16:27.496 32505856 bytes (33 MB, 31 MiB) copied, 0.58213 s, 55.8 MB/s 00:16:27.496 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:27.496 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:27.496 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:27.496 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:27.496 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:27.496 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:27.496 03:16:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:27.755 [2024-11-18 03:16:31.083441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # 
break 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.755 [2024-11-18 03:16:31.120145] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.755 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.755 "name": "raid_bdev1", 00:16:27.755 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:27.755 "strip_size_kb": 0, 00:16:27.755 "state": "online", 00:16:27.755 "raid_level": "raid1", 00:16:27.755 "superblock": true, 00:16:27.755 "num_base_bdevs": 2, 00:16:27.755 "num_base_bdevs_discovered": 1, 00:16:27.755 "num_base_bdevs_operational": 1, 00:16:27.755 "base_bdevs_list": [ 00:16:27.755 { 00:16:27.755 "name": null, 00:16:27.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.756 "is_configured": false, 00:16:27.756 "data_offset": 0, 00:16:27.756 "data_size": 7936 00:16:27.756 }, 00:16:27.756 { 00:16:27.756 "name": "BaseBdev2", 00:16:27.756 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:27.756 "is_configured": true, 00:16:27.756 "data_offset": 256, 00:16:27.756 "data_size": 7936 00:16:27.756 } 00:16:27.756 ] 00:16:27.756 }' 00:16:27.756 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.756 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.014 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:28.014 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:28.014 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.014 [2024-11-18 03:16:31.571385] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:28.014 [2024-11-18 03:16:31.573198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:16:28.014 [2024-11-18 03:16:31.575115] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:28.014 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.014 03:16:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:29.392 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.392 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.392 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.392 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.392 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.392 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.392 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.392 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.392 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.392 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.392 03:16:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.392 "name": "raid_bdev1", 00:16:29.392 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:29.392 "strip_size_kb": 0, 00:16:29.392 "state": "online", 00:16:29.392 "raid_level": "raid1", 00:16:29.392 "superblock": true, 00:16:29.392 "num_base_bdevs": 2, 00:16:29.392 "num_base_bdevs_discovered": 2, 00:16:29.392 "num_base_bdevs_operational": 2, 00:16:29.392 "process": { 00:16:29.392 "type": "rebuild", 00:16:29.392 "target": "spare", 00:16:29.392 "progress": { 00:16:29.392 "blocks": 2560, 00:16:29.392 "percent": 32 00:16:29.392 } 00:16:29.392 }, 00:16:29.392 "base_bdevs_list": [ 00:16:29.392 { 00:16:29.392 "name": "spare", 00:16:29.392 "uuid": "f5df98fb-8bf3-56e0-af06-c1e9b54029ef", 00:16:29.392 "is_configured": true, 00:16:29.392 "data_offset": 256, 00:16:29.392 "data_size": 7936 00:16:29.392 }, 00:16:29.392 { 00:16:29.392 "name": "BaseBdev2", 00:16:29.392 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:29.392 "is_configured": true, 00:16:29.392 "data_offset": 256, 00:16:29.392 "data_size": 7936 00:16:29.392 } 00:16:29.392 ] 00:16:29.392 }' 00:16:29.392 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.392 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.392 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.392 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.392 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:29.392 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.393 [2024-11-18 03:16:32.730298] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.393 [2024-11-18 03:16:32.780576] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:29.393 [2024-11-18 03:16:32.780641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.393 [2024-11-18 03:16:32.780658] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.393 [2024-11-18 03:16:32.780665] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.393 "name": "raid_bdev1", 00:16:29.393 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:29.393 "strip_size_kb": 0, 00:16:29.393 "state": "online", 00:16:29.393 "raid_level": "raid1", 00:16:29.393 "superblock": true, 00:16:29.393 "num_base_bdevs": 2, 00:16:29.393 "num_base_bdevs_discovered": 1, 00:16:29.393 "num_base_bdevs_operational": 1, 00:16:29.393 "base_bdevs_list": [ 00:16:29.393 { 00:16:29.393 "name": null, 00:16:29.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.393 "is_configured": false, 00:16:29.393 "data_offset": 0, 00:16:29.393 "data_size": 7936 00:16:29.393 }, 00:16:29.393 { 00:16:29.393 "name": "BaseBdev2", 00:16:29.393 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:29.393 "is_configured": true, 00:16:29.393 "data_offset": 256, 00:16:29.393 "data_size": 7936 00:16:29.393 } 00:16:29.393 ] 00:16:29.393 }' 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.393 03:16:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.961 03:16:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.961 "name": "raid_bdev1", 00:16:29.961 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:29.961 "strip_size_kb": 0, 00:16:29.961 "state": "online", 00:16:29.961 "raid_level": "raid1", 00:16:29.961 "superblock": true, 00:16:29.961 "num_base_bdevs": 2, 00:16:29.961 "num_base_bdevs_discovered": 1, 00:16:29.961 "num_base_bdevs_operational": 1, 00:16:29.961 "base_bdevs_list": [ 00:16:29.961 { 00:16:29.961 "name": null, 00:16:29.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.961 "is_configured": false, 00:16:29.961 "data_offset": 0, 00:16:29.961 "data_size": 7936 00:16:29.961 }, 00:16:29.961 { 00:16:29.961 "name": "BaseBdev2", 00:16:29.961 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:29.961 "is_configured": true, 00:16:29.961 "data_offset": 256, 00:16:29.961 "data_size": 7936 
00:16:29.961 } 00:16:29.961 ] 00:16:29.961 }' 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.961 [2024-11-18 03:16:33.382880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:29.961 [2024-11-18 03:16:33.384719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:16:29.961 [2024-11-18 03:16:33.386754] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.961 03:16:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:30.897 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.897 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.897 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.897 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:16:30.897 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.897 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.897 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.897 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.897 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.897 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.897 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.897 "name": "raid_bdev1", 00:16:30.897 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:30.897 "strip_size_kb": 0, 00:16:30.897 "state": "online", 00:16:30.897 "raid_level": "raid1", 00:16:30.897 "superblock": true, 00:16:30.897 "num_base_bdevs": 2, 00:16:30.897 "num_base_bdevs_discovered": 2, 00:16:30.897 "num_base_bdevs_operational": 2, 00:16:30.897 "process": { 00:16:30.897 "type": "rebuild", 00:16:30.897 "target": "spare", 00:16:30.897 "progress": { 00:16:30.897 "blocks": 2560, 00:16:30.897 "percent": 32 00:16:30.897 } 00:16:30.897 }, 00:16:30.897 "base_bdevs_list": [ 00:16:30.897 { 00:16:30.897 "name": "spare", 00:16:30.897 "uuid": "f5df98fb-8bf3-56e0-af06-c1e9b54029ef", 00:16:30.897 "is_configured": true, 00:16:30.897 "data_offset": 256, 00:16:30.897 "data_size": 7936 00:16:30.897 }, 00:16:30.897 { 00:16:30.897 "name": "BaseBdev2", 00:16:30.897 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:30.897 "is_configured": true, 00:16:30.897 "data_offset": 256, 00:16:30.897 "data_size": 7936 00:16:30.897 } 00:16:30.897 ] 00:16:30.897 }' 00:16:30.897 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:31.156 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=588 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.156 
03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.156 "name": "raid_bdev1", 00:16:31.156 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:31.156 "strip_size_kb": 0, 00:16:31.156 "state": "online", 00:16:31.156 "raid_level": "raid1", 00:16:31.156 "superblock": true, 00:16:31.156 "num_base_bdevs": 2, 00:16:31.156 "num_base_bdevs_discovered": 2, 00:16:31.156 "num_base_bdevs_operational": 2, 00:16:31.156 "process": { 00:16:31.156 "type": "rebuild", 00:16:31.156 "target": "spare", 00:16:31.156 "progress": { 00:16:31.156 "blocks": 2816, 00:16:31.156 "percent": 35 00:16:31.156 } 00:16:31.156 }, 00:16:31.156 "base_bdevs_list": [ 00:16:31.156 { 00:16:31.156 "name": "spare", 00:16:31.156 "uuid": "f5df98fb-8bf3-56e0-af06-c1e9b54029ef", 00:16:31.156 "is_configured": true, 00:16:31.156 "data_offset": 256, 00:16:31.156 "data_size": 7936 00:16:31.156 }, 00:16:31.156 { 00:16:31.156 "name": "BaseBdev2", 00:16:31.156 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:31.156 "is_configured": true, 00:16:31.156 "data_offset": 256, 00:16:31.156 "data_size": 7936 00:16:31.156 } 00:16:31.156 ] 00:16:31.156 }' 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.156 03:16:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:32.092 03:16:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.092 03:16:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.092 03:16:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.092 03:16:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.092 03:16:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.092 03:16:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.092 03:16:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.092 03:16:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.092 03:16:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.092 03:16:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.351 03:16:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.351 03:16:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.351 "name": "raid_bdev1", 00:16:32.351 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:32.351 "strip_size_kb": 0, 00:16:32.351 
"state": "online", 00:16:32.351 "raid_level": "raid1", 00:16:32.351 "superblock": true, 00:16:32.351 "num_base_bdevs": 2, 00:16:32.351 "num_base_bdevs_discovered": 2, 00:16:32.351 "num_base_bdevs_operational": 2, 00:16:32.351 "process": { 00:16:32.351 "type": "rebuild", 00:16:32.351 "target": "spare", 00:16:32.351 "progress": { 00:16:32.351 "blocks": 5632, 00:16:32.351 "percent": 70 00:16:32.351 } 00:16:32.351 }, 00:16:32.351 "base_bdevs_list": [ 00:16:32.351 { 00:16:32.351 "name": "spare", 00:16:32.351 "uuid": "f5df98fb-8bf3-56e0-af06-c1e9b54029ef", 00:16:32.351 "is_configured": true, 00:16:32.351 "data_offset": 256, 00:16:32.351 "data_size": 7936 00:16:32.351 }, 00:16:32.351 { 00:16:32.351 "name": "BaseBdev2", 00:16:32.351 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:32.351 "is_configured": true, 00:16:32.351 "data_offset": 256, 00:16:32.351 "data_size": 7936 00:16:32.351 } 00:16:32.351 ] 00:16:32.351 }' 00:16:32.351 03:16:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.351 03:16:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.351 03:16:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.351 03:16:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.351 03:16:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.288 [2024-11-18 03:16:36.499342] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:33.288 [2024-11-18 03:16:36.499514] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:33.288 [2024-11-18 03:16:36.499662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.288 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.288 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.288 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.288 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.288 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.288 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.288 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.288 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.288 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.288 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.288 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.288 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.288 "name": "raid_bdev1", 00:16:33.288 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:33.288 "strip_size_kb": 0, 00:16:33.288 "state": "online", 00:16:33.289 "raid_level": "raid1", 00:16:33.289 "superblock": true, 00:16:33.289 "num_base_bdevs": 2, 00:16:33.289 "num_base_bdevs_discovered": 2, 00:16:33.289 "num_base_bdevs_operational": 2, 00:16:33.289 "base_bdevs_list": [ 00:16:33.289 { 00:16:33.289 "name": "spare", 00:16:33.289 "uuid": "f5df98fb-8bf3-56e0-af06-c1e9b54029ef", 00:16:33.289 "is_configured": true, 00:16:33.289 "data_offset": 256, 00:16:33.289 "data_size": 7936 
00:16:33.289 }, 00:16:33.289 { 00:16:33.289 "name": "BaseBdev2", 00:16:33.289 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:33.289 "is_configured": true, 00:16:33.289 "data_offset": 256, 00:16:33.289 "data_size": 7936 00:16:33.289 } 00:16:33.289 ] 00:16:33.289 }' 00:16:33.289 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.548 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:33.548 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.549 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:33.549 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:16:33.549 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.549 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.549 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.549 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.549 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.549 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.549 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.549 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.549 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.549 
03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.549 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.549 "name": "raid_bdev1", 00:16:33.549 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:33.549 "strip_size_kb": 0, 00:16:33.549 "state": "online", 00:16:33.549 "raid_level": "raid1", 00:16:33.549 "superblock": true, 00:16:33.549 "num_base_bdevs": 2, 00:16:33.549 "num_base_bdevs_discovered": 2, 00:16:33.549 "num_base_bdevs_operational": 2, 00:16:33.549 "base_bdevs_list": [ 00:16:33.549 { 00:16:33.549 "name": "spare", 00:16:33.549 "uuid": "f5df98fb-8bf3-56e0-af06-c1e9b54029ef", 00:16:33.549 "is_configured": true, 00:16:33.549 "data_offset": 256, 00:16:33.549 "data_size": 7936 00:16:33.549 }, 00:16:33.549 { 00:16:33.549 "name": "BaseBdev2", 00:16:33.549 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:33.549 "is_configured": true, 00:16:33.549 "data_offset": 256, 00:16:33.549 "data_size": 7936 00:16:33.549 } 00:16:33.549 ] 00:16:33.549 }' 00:16:33.549 03:16:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.549 03:16:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.549 "name": "raid_bdev1", 00:16:33.549 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:33.549 "strip_size_kb": 0, 00:16:33.549 "state": "online", 00:16:33.549 "raid_level": "raid1", 00:16:33.549 "superblock": true, 00:16:33.549 "num_base_bdevs": 2, 00:16:33.549 "num_base_bdevs_discovered": 2, 00:16:33.549 "num_base_bdevs_operational": 2, 00:16:33.549 "base_bdevs_list": [ 00:16:33.549 { 00:16:33.549 "name": "spare", 00:16:33.549 "uuid": 
"f5df98fb-8bf3-56e0-af06-c1e9b54029ef", 00:16:33.549 "is_configured": true, 00:16:33.549 "data_offset": 256, 00:16:33.549 "data_size": 7936 00:16:33.549 }, 00:16:33.549 { 00:16:33.549 "name": "BaseBdev2", 00:16:33.549 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:33.549 "is_configured": true, 00:16:33.549 "data_offset": 256, 00:16:33.549 "data_size": 7936 00:16:33.549 } 00:16:33.549 ] 00:16:33.549 }' 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.549 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.118 [2024-11-18 03:16:37.550465] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:34.118 [2024-11-18 03:16:37.550545] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:34.118 [2024-11-18 03:16:37.550648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.118 [2024-11-18 03:16:37.550743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.118 [2024-11-18 03:16:37.550817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- 
# jq length 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:34.118 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:34.378 
/dev/nbd0 00:16:34.378 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:34.378 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:34.378 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:34.378 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:34.378 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:34.378 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:34.378 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:34.378 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:34.378 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:34.378 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:34.378 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:34.378 1+0 records in 00:16:34.378 1+0 records out 00:16:34.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466211 s, 8.8 MB/s 00:16:34.378 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.378 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:34.378 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.378 03:16:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:34.378 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:34.378 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.378 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:34.378 03:16:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:34.637 /dev/nbd1 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:16:34.638 1+0 records in 00:16:34.638 1+0 records out 00:16:34.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393029 s, 10.4 MB/s 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:34.638 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:34.898 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:34.898 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:34.898 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:34.898 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:34.898 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:34.898 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:34.898 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:34.898 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:34.898 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:34.898 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:35.157 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:35.157 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:35.157 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:35.157 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:35.157 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:35.157 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:35.157 
03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:35.157 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:35.157 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:35.157 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:35.158 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.158 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.158 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.158 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:35.158 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.158 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.158 [2024-11-18 03:16:38.631154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:35.158 [2024-11-18 03:16:38.631265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.158 [2024-11-18 03:16:38.631304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:35.158 [2024-11-18 03:16:38.631336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.158 [2024-11-18 03:16:38.633350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.158 [2024-11-18 03:16:38.633390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:35.158 [2024-11-18 03:16:38.633452] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:16:35.158 [2024-11-18 03:16:38.633505] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.158 [2024-11-18 03:16:38.633615] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.158 spare 00:16:35.158 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.158 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:35.158 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.158 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.418 [2024-11-18 03:16:38.733513] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:35.418 [2024-11-18 03:16:38.733608] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:35.418 [2024-11-18 03:16:38.733772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:16:35.418 [2024-11-18 03:16:38.733981] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:35.418 [2024-11-18 03:16:38.734030] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:35.418 [2024-11-18 03:16:38.734180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.418 "name": "raid_bdev1", 00:16:35.418 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:35.418 "strip_size_kb": 0, 00:16:35.418 "state": "online", 00:16:35.418 "raid_level": "raid1", 00:16:35.418 "superblock": true, 00:16:35.418 "num_base_bdevs": 2, 00:16:35.418 "num_base_bdevs_discovered": 2, 00:16:35.418 "num_base_bdevs_operational": 2, 00:16:35.418 "base_bdevs_list": [ 
00:16:35.418 { 00:16:35.418 "name": "spare", 00:16:35.418 "uuid": "f5df98fb-8bf3-56e0-af06-c1e9b54029ef", 00:16:35.418 "is_configured": true, 00:16:35.418 "data_offset": 256, 00:16:35.418 "data_size": 7936 00:16:35.418 }, 00:16:35.418 { 00:16:35.418 "name": "BaseBdev2", 00:16:35.418 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:35.418 "is_configured": true, 00:16:35.418 "data_offset": 256, 00:16:35.418 "data_size": 7936 00:16:35.418 } 00:16:35.418 ] 00:16:35.418 }' 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.418 03:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.678 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:35.678 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.678 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:35.678 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:35.678 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.678 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.678 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.678 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.678 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.678 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.678 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.678 "name": "raid_bdev1", 00:16:35.678 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:35.678 "strip_size_kb": 0, 00:16:35.678 "state": "online", 00:16:35.678 "raid_level": "raid1", 00:16:35.678 "superblock": true, 00:16:35.678 "num_base_bdevs": 2, 00:16:35.678 "num_base_bdevs_discovered": 2, 00:16:35.678 "num_base_bdevs_operational": 2, 00:16:35.678 "base_bdevs_list": [ 00:16:35.678 { 00:16:35.678 "name": "spare", 00:16:35.678 "uuid": "f5df98fb-8bf3-56e0-af06-c1e9b54029ef", 00:16:35.678 "is_configured": true, 00:16:35.678 "data_offset": 256, 00:16:35.678 "data_size": 7936 00:16:35.678 }, 00:16:35.678 { 00:16:35.678 "name": "BaseBdev2", 00:16:35.678 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:35.678 "is_configured": true, 00:16:35.678 "data_offset": 256, 00:16:35.678 "data_size": 7936 00:16:35.678 } 00:16:35.678 ] 00:16:35.678 }' 00:16:35.678 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.937 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:35.937 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.937 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:35.937 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:35.937 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.937 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.937 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.937 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:35.937 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.937 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:35.937 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.937 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.937 [2024-11-18 03:16:39.362065] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:35.937 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.937 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:35.937 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.938 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.938 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.938 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.938 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:35.938 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.938 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.938 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.938 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.938 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.938 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.938 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.938 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.938 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.938 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.938 "name": "raid_bdev1", 00:16:35.938 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:35.938 "strip_size_kb": 0, 00:16:35.938 "state": "online", 00:16:35.938 "raid_level": "raid1", 00:16:35.938 "superblock": true, 00:16:35.938 "num_base_bdevs": 2, 00:16:35.938 "num_base_bdevs_discovered": 1, 00:16:35.938 "num_base_bdevs_operational": 1, 00:16:35.938 "base_bdevs_list": [ 00:16:35.938 { 00:16:35.938 "name": null, 00:16:35.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.938 "is_configured": false, 00:16:35.938 "data_offset": 0, 00:16:35.938 "data_size": 7936 00:16:35.938 }, 00:16:35.938 { 00:16:35.938 "name": "BaseBdev2", 00:16:35.938 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:35.938 "is_configured": true, 00:16:35.938 "data_offset": 256, 00:16:35.938 "data_size": 7936 00:16:35.938 } 00:16:35.938 ] 00:16:35.938 }' 00:16:35.938 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.938 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.506 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:36.506 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:36.507 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.507 [2024-11-18 03:16:39.821297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:36.507 [2024-11-18 03:16:39.821555] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:36.507 [2024-11-18 03:16:39.821634] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:36.507 [2024-11-18 03:16:39.821702] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:36.507 [2024-11-18 03:16:39.823443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:16:36.507 [2024-11-18 03:16:39.825540] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:36.507 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.507 03:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:37.447 03:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.447 03:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.447 03:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.447 03:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.447 03:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.447 03:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.447 03:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.447 03:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.447 03:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.447 03:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.447 03:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.447 "name": "raid_bdev1", 00:16:37.447 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:37.447 "strip_size_kb": 0, 00:16:37.447 "state": "online", 00:16:37.447 "raid_level": "raid1", 00:16:37.447 "superblock": true, 00:16:37.447 "num_base_bdevs": 2, 00:16:37.447 "num_base_bdevs_discovered": 2, 00:16:37.447 "num_base_bdevs_operational": 2, 00:16:37.447 "process": { 00:16:37.447 "type": "rebuild", 00:16:37.447 "target": "spare", 00:16:37.447 "progress": { 00:16:37.447 "blocks": 2560, 00:16:37.447 "percent": 32 00:16:37.447 } 00:16:37.447 }, 00:16:37.447 "base_bdevs_list": [ 00:16:37.447 { 00:16:37.447 "name": "spare", 00:16:37.447 "uuid": "f5df98fb-8bf3-56e0-af06-c1e9b54029ef", 00:16:37.447 "is_configured": true, 00:16:37.447 "data_offset": 256, 00:16:37.447 "data_size": 7936 00:16:37.447 }, 00:16:37.447 { 00:16:37.447 "name": "BaseBdev2", 00:16:37.447 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:37.447 "is_configured": true, 00:16:37.447 "data_offset": 256, 00:16:37.447 "data_size": 7936 00:16:37.447 } 00:16:37.447 ] 00:16:37.447 }' 00:16:37.447 03:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.447 03:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.447 03:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.447 03:16:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.448 03:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:37.448 03:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.448 03:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.448 [2024-11-18 03:16:40.972389] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.708 [2024-11-18 03:16:41.030338] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:37.708 [2024-11-18 03:16:41.030456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.708 [2024-11-18 03:16:41.030493] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.708 [2024-11-18 03:16:41.030501] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:37.708 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.708 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:37.708 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.708 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.708 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.708 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.708 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:37.708 03:16:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.708 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.708 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.708 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.708 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.708 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.708 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.708 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.708 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.708 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.708 "name": "raid_bdev1", 00:16:37.708 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:37.708 "strip_size_kb": 0, 00:16:37.708 "state": "online", 00:16:37.708 "raid_level": "raid1", 00:16:37.708 "superblock": true, 00:16:37.708 "num_base_bdevs": 2, 00:16:37.708 "num_base_bdevs_discovered": 1, 00:16:37.708 "num_base_bdevs_operational": 1, 00:16:37.708 "base_bdevs_list": [ 00:16:37.708 { 00:16:37.708 "name": null, 00:16:37.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.708 "is_configured": false, 00:16:37.708 "data_offset": 0, 00:16:37.708 "data_size": 7936 00:16:37.708 }, 00:16:37.708 { 00:16:37.708 "name": "BaseBdev2", 00:16:37.708 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:37.708 "is_configured": true, 00:16:37.708 "data_offset": 256, 00:16:37.708 "data_size": 7936 00:16:37.708 } 
00:16:37.708 ] 00:16:37.708 }' 00:16:37.708 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.708 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.968 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:37.968 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.968 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.968 [2024-11-18 03:16:41.476837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:37.968 [2024-11-18 03:16:41.476970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.968 [2024-11-18 03:16:41.477014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:37.968 [2024-11-18 03:16:41.477068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.968 [2024-11-18 03:16:41.477335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.968 [2024-11-18 03:16:41.477390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:37.968 [2024-11-18 03:16:41.477488] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:37.968 [2024-11-18 03:16:41.477528] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:37.968 [2024-11-18 03:16:41.477585] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:37.968 [2024-11-18 03:16:41.477648] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:37.968 [2024-11-18 03:16:41.479358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:16:37.968 [2024-11-18 03:16:41.481256] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:37.968 spare 00:16:37.968 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.968 03:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.403 "name": 
"raid_bdev1", 00:16:39.403 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:39.403 "strip_size_kb": 0, 00:16:39.403 "state": "online", 00:16:39.403 "raid_level": "raid1", 00:16:39.403 "superblock": true, 00:16:39.403 "num_base_bdevs": 2, 00:16:39.403 "num_base_bdevs_discovered": 2, 00:16:39.403 "num_base_bdevs_operational": 2, 00:16:39.403 "process": { 00:16:39.403 "type": "rebuild", 00:16:39.403 "target": "spare", 00:16:39.403 "progress": { 00:16:39.403 "blocks": 2560, 00:16:39.403 "percent": 32 00:16:39.403 } 00:16:39.403 }, 00:16:39.403 "base_bdevs_list": [ 00:16:39.403 { 00:16:39.403 "name": "spare", 00:16:39.403 "uuid": "f5df98fb-8bf3-56e0-af06-c1e9b54029ef", 00:16:39.403 "is_configured": true, 00:16:39.403 "data_offset": 256, 00:16:39.403 "data_size": 7936 00:16:39.403 }, 00:16:39.403 { 00:16:39.403 "name": "BaseBdev2", 00:16:39.403 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:39.403 "is_configured": true, 00:16:39.403 "data_offset": 256, 00:16:39.403 "data_size": 7936 00:16:39.403 } 00:16:39.403 ] 00:16:39.403 }' 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.403 [2024-11-18 03:16:42.636104] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:16:39.403 [2024-11-18 03:16:42.686063] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:39.403 [2024-11-18 03:16:42.686213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.403 [2024-11-18 03:16:42.686250] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:39.403 [2024-11-18 03:16:42.686274] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.403 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.403 "name": "raid_bdev1", 00:16:39.403 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:39.403 "strip_size_kb": 0, 00:16:39.403 "state": "online", 00:16:39.403 "raid_level": "raid1", 00:16:39.403 "superblock": true, 00:16:39.403 "num_base_bdevs": 2, 00:16:39.404 "num_base_bdevs_discovered": 1, 00:16:39.404 "num_base_bdevs_operational": 1, 00:16:39.404 "base_bdevs_list": [ 00:16:39.404 { 00:16:39.404 "name": null, 00:16:39.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.404 "is_configured": false, 00:16:39.404 "data_offset": 0, 00:16:39.404 "data_size": 7936 00:16:39.404 }, 00:16:39.404 { 00:16:39.404 "name": "BaseBdev2", 00:16:39.404 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:39.404 "is_configured": true, 00:16:39.404 "data_offset": 256, 00:16:39.404 "data_size": 7936 00:16:39.404 } 00:16:39.404 ] 00:16:39.404 }' 00:16:39.404 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.404 03:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.664 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.664 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.664 03:16:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.664 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.664 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.664 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.664 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.664 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.664 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.664 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.664 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.664 "name": "raid_bdev1", 00:16:39.664 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:39.664 "strip_size_kb": 0, 00:16:39.664 "state": "online", 00:16:39.664 "raid_level": "raid1", 00:16:39.664 "superblock": true, 00:16:39.664 "num_base_bdevs": 2, 00:16:39.664 "num_base_bdevs_discovered": 1, 00:16:39.664 "num_base_bdevs_operational": 1, 00:16:39.664 "base_bdevs_list": [ 00:16:39.664 { 00:16:39.664 "name": null, 00:16:39.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.664 "is_configured": false, 00:16:39.664 "data_offset": 0, 00:16:39.664 "data_size": 7936 00:16:39.664 }, 00:16:39.664 { 00:16:39.664 "name": "BaseBdev2", 00:16:39.664 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:39.664 "is_configured": true, 00:16:39.664 "data_offset": 256, 00:16:39.664 "data_size": 7936 00:16:39.664 } 00:16:39.664 ] 00:16:39.664 }' 00:16:39.664 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.664 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.664 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.924 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.924 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:39.924 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.924 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.924 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.924 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:39.924 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.924 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.924 [2024-11-18 03:16:43.288393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:39.924 [2024-11-18 03:16:43.288457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.924 [2024-11-18 03:16:43.288477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:39.924 [2024-11-18 03:16:43.288488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.924 [2024-11-18 03:16:43.288693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.924 [2024-11-18 03:16:43.288711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:16:39.924 [2024-11-18 03:16:43.288760] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:39.924 [2024-11-18 03:16:43.288778] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:39.924 [2024-11-18 03:16:43.288786] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:39.924 [2024-11-18 03:16:43.288808] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:39.924 BaseBdev1 00:16:39.924 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.924 03:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:40.860 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:40.860 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.860 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.860 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.860 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.860 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:40.860 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.860 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.860 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:40.860 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.860 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.860 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.860 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.860 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.860 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.860 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.860 "name": "raid_bdev1", 00:16:40.860 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:40.860 "strip_size_kb": 0, 00:16:40.860 "state": "online", 00:16:40.860 "raid_level": "raid1", 00:16:40.860 "superblock": true, 00:16:40.860 "num_base_bdevs": 2, 00:16:40.860 "num_base_bdevs_discovered": 1, 00:16:40.860 "num_base_bdevs_operational": 1, 00:16:40.860 "base_bdevs_list": [ 00:16:40.860 { 00:16:40.860 "name": null, 00:16:40.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.860 "is_configured": false, 00:16:40.860 "data_offset": 0, 00:16:40.860 "data_size": 7936 00:16:40.860 }, 00:16:40.860 { 00:16:40.860 "name": "BaseBdev2", 00:16:40.860 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:40.860 "is_configured": true, 00:16:40.860 "data_offset": 256, 00:16:40.860 "data_size": 7936 00:16:40.860 } 00:16:40.860 ] 00:16:40.860 }' 00:16:40.860 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.860 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.427 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:16:41.427 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.427 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:41.427 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:41.427 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.427 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.427 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.427 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.427 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.427 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.427 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.427 "name": "raid_bdev1", 00:16:41.427 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:41.427 "strip_size_kb": 0, 00:16:41.427 "state": "online", 00:16:41.427 "raid_level": "raid1", 00:16:41.427 "superblock": true, 00:16:41.427 "num_base_bdevs": 2, 00:16:41.427 "num_base_bdevs_discovered": 1, 00:16:41.427 "num_base_bdevs_operational": 1, 00:16:41.427 "base_bdevs_list": [ 00:16:41.427 { 00:16:41.427 "name": null, 00:16:41.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.427 "is_configured": false, 00:16:41.427 "data_offset": 0, 00:16:41.427 "data_size": 7936 00:16:41.427 }, 00:16:41.427 { 00:16:41.427 "name": "BaseBdev2", 00:16:41.427 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:41.427 "is_configured": 
true, 00:16:41.427 "data_offset": 256, 00:16:41.427 "data_size": 7936 00:16:41.427 } 00:16:41.427 ] 00:16:41.427 }' 00:16:41.427 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.427 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:41.427 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.428 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:41.428 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:41.428 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:16:41.428 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:41.428 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:41.428 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.428 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:41.428 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.428 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:41.428 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.428 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.428 [2024-11-18 03:16:44.881670] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.428 [2024-11-18 03:16:44.881883] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:41.428 [2024-11-18 03:16:44.881939] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:41.428 request: 00:16:41.428 { 00:16:41.428 "base_bdev": "BaseBdev1", 00:16:41.428 "raid_bdev": "raid_bdev1", 00:16:41.428 "method": "bdev_raid_add_base_bdev", 00:16:41.428 "req_id": 1 00:16:41.428 } 00:16:41.428 Got JSON-RPC error response 00:16:41.428 response: 00:16:41.428 { 00:16:41.428 "code": -22, 00:16:41.428 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:41.428 } 00:16:41.428 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:41.428 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:16:41.428 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:41.428 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:41.428 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:41.428 03:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:42.362 03:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:42.362 03:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.362 03:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.363 03:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:16:42.363 03:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.363 03:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:42.363 03:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.363 03:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.363 03:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.363 03:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.363 03:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.363 03:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.363 03:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.363 03:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.363 03:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.621 03:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.621 "name": "raid_bdev1", 00:16:42.621 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:42.621 "strip_size_kb": 0, 00:16:42.621 "state": "online", 00:16:42.621 "raid_level": "raid1", 00:16:42.621 "superblock": true, 00:16:42.621 "num_base_bdevs": 2, 00:16:42.621 "num_base_bdevs_discovered": 1, 00:16:42.621 "num_base_bdevs_operational": 1, 00:16:42.621 "base_bdevs_list": [ 00:16:42.621 { 00:16:42.621 "name": null, 00:16:42.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.621 "is_configured": false, 00:16:42.621 
"data_offset": 0, 00:16:42.621 "data_size": 7936 00:16:42.621 }, 00:16:42.621 { 00:16:42.621 "name": "BaseBdev2", 00:16:42.621 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:42.621 "is_configured": true, 00:16:42.621 "data_offset": 256, 00:16:42.621 "data_size": 7936 00:16:42.621 } 00:16:42.621 ] 00:16:42.621 }' 00:16:42.621 03:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.621 03:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.881 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.881 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.881 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.881 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.881 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.881 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.881 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.881 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.881 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.881 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.881 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.881 "name": "raid_bdev1", 00:16:42.881 "uuid": "74f064b6-df90-477e-86d6-45b66d632098", 00:16:42.881 
"strip_size_kb": 0, 00:16:42.881 "state": "online", 00:16:42.881 "raid_level": "raid1", 00:16:42.881 "superblock": true, 00:16:42.881 "num_base_bdevs": 2, 00:16:42.881 "num_base_bdevs_discovered": 1, 00:16:42.881 "num_base_bdevs_operational": 1, 00:16:42.881 "base_bdevs_list": [ 00:16:42.881 { 00:16:42.881 "name": null, 00:16:42.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.881 "is_configured": false, 00:16:42.881 "data_offset": 0, 00:16:42.881 "data_size": 7936 00:16:42.881 }, 00:16:42.881 { 00:16:42.881 "name": "BaseBdev2", 00:16:42.881 "uuid": "9fd4fefc-a5c0-5e31-8b50-460677fd594d", 00:16:42.881 "is_configured": true, 00:16:42.881 "data_offset": 256, 00:16:42.881 "data_size": 7936 00:16:42.881 } 00:16:42.881 ] 00:16:42.881 }' 00:16:42.881 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.881 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.881 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.141 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.141 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 98205 00:16:43.141 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98205 ']' 00:16:43.141 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 98205 00:16:43.141 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:43.141 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:43.141 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98205 00:16:43.141 03:16:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:43.141 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:43.141 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98205' 00:16:43.141 killing process with pid 98205 00:16:43.141 Received shutdown signal, test time was about 60.000000 seconds 00:16:43.141 00:16:43.141 Latency(us) 00:16:43.141 [2024-11-18T03:16:46.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.141 [2024-11-18T03:16:46.718Z] =================================================================================================================== 00:16:43.141 [2024-11-18T03:16:46.718Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:43.141 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 98205 00:16:43.141 [2024-11-18 03:16:46.537702] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:43.141 [2024-11-18 03:16:46.537846] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.141 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 98205 00:16:43.141 [2024-11-18 03:16:46.537897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.141 [2024-11-18 03:16:46.537907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:43.141 [2024-11-18 03:16:46.571680] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:43.401 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:16:43.401 00:16:43.401 real 0m18.301s 00:16:43.401 user 0m24.473s 00:16:43.401 sys 0m2.389s 00:16:43.401 03:16:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:43.401 ************************************ 00:16:43.401 END TEST raid_rebuild_test_sb_md_separate 00:16:43.401 ************************************ 00:16:43.401 03:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.401 03:16:46 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:16:43.401 03:16:46 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:16:43.401 03:16:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:43.401 03:16:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:43.401 03:16:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:43.401 ************************************ 00:16:43.401 START TEST raid_state_function_test_sb_md_interleaved 00:16:43.401 ************************************ 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:43.401 03:16:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:16:43.401 Process raid pid: 98885 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=98885 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98885' 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 98885 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98885 ']' 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:43.401 03:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.401 [2024-11-18 03:16:46.958627] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:43.401 [2024-11-18 03:16:46.958872] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.660 [2024-11-18 03:16:47.121853] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.660 [2024-11-18 03:16:47.172279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.660 [2024-11-18 03:16:47.214771] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.660 [2024-11-18 03:16:47.214910] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.596 [2024-11-18 03:16:47.808242] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:44.596 [2024-11-18 03:16:47.808345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:44.596 [2024-11-18 03:16:47.808378] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:44.596 [2024-11-18 03:16:47.808403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:44.596 03:16:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.596 03:16:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.596 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.596 "name": "Existed_Raid", 00:16:44.596 "uuid": "ba51dc16-4e63-44e5-b50b-79eafbdde575", 00:16:44.597 "strip_size_kb": 0, 00:16:44.597 "state": "configuring", 00:16:44.597 "raid_level": "raid1", 00:16:44.597 "superblock": true, 00:16:44.597 "num_base_bdevs": 2, 00:16:44.597 "num_base_bdevs_discovered": 0, 00:16:44.597 "num_base_bdevs_operational": 2, 00:16:44.597 "base_bdevs_list": [ 00:16:44.597 { 00:16:44.597 "name": "BaseBdev1", 00:16:44.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.597 "is_configured": false, 00:16:44.597 "data_offset": 0, 00:16:44.597 "data_size": 0 00:16:44.597 }, 00:16:44.597 { 00:16:44.597 "name": "BaseBdev2", 00:16:44.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.597 "is_configured": false, 00:16:44.597 "data_offset": 0, 00:16:44.597 "data_size": 0 00:16:44.597 } 00:16:44.597 ] 00:16:44.597 }' 00:16:44.597 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.597 03:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.856 [2024-11-18 03:16:48.235433] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:44.856 [2024-11-18 03:16:48.235531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state 
configuring 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.856 [2024-11-18 03:16:48.247438] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:44.856 [2024-11-18 03:16:48.247537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:44.856 [2024-11-18 03:16:48.247575] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:44.856 [2024-11-18 03:16:48.247599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.856 [2024-11-18 03:16:48.268440] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:44.856 BaseBdev1 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.856 [ 00:16:44.856 { 00:16:44.856 "name": "BaseBdev1", 00:16:44.856 "aliases": [ 00:16:44.856 "cd05552c-0ab6-4704-908c-caea319a55f5" 00:16:44.856 ], 00:16:44.856 "product_name": "Malloc disk", 00:16:44.856 "block_size": 4128, 00:16:44.856 "num_blocks": 8192, 00:16:44.856 "uuid": "cd05552c-0ab6-4704-908c-caea319a55f5", 00:16:44.856 "md_size": 32, 00:16:44.856 
"md_interleave": true, 00:16:44.856 "dif_type": 0, 00:16:44.856 "assigned_rate_limits": { 00:16:44.856 "rw_ios_per_sec": 0, 00:16:44.856 "rw_mbytes_per_sec": 0, 00:16:44.856 "r_mbytes_per_sec": 0, 00:16:44.856 "w_mbytes_per_sec": 0 00:16:44.856 }, 00:16:44.856 "claimed": true, 00:16:44.856 "claim_type": "exclusive_write", 00:16:44.856 "zoned": false, 00:16:44.856 "supported_io_types": { 00:16:44.856 "read": true, 00:16:44.856 "write": true, 00:16:44.856 "unmap": true, 00:16:44.856 "flush": true, 00:16:44.856 "reset": true, 00:16:44.856 "nvme_admin": false, 00:16:44.856 "nvme_io": false, 00:16:44.856 "nvme_io_md": false, 00:16:44.856 "write_zeroes": true, 00:16:44.856 "zcopy": true, 00:16:44.856 "get_zone_info": false, 00:16:44.856 "zone_management": false, 00:16:44.856 "zone_append": false, 00:16:44.856 "compare": false, 00:16:44.856 "compare_and_write": false, 00:16:44.856 "abort": true, 00:16:44.856 "seek_hole": false, 00:16:44.856 "seek_data": false, 00:16:44.856 "copy": true, 00:16:44.856 "nvme_iov_md": false 00:16:44.856 }, 00:16:44.856 "memory_domains": [ 00:16:44.856 { 00:16:44.856 "dma_device_id": "system", 00:16:44.856 "dma_device_type": 1 00:16:44.856 }, 00:16:44.856 { 00:16:44.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.856 "dma_device_type": 2 00:16:44.856 } 00:16:44.856 ], 00:16:44.856 "driver_specific": {} 00:16:44.856 } 00:16:44.856 ] 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.856 03:16:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.856 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.857 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.857 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.857 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.857 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.857 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.857 "name": "Existed_Raid", 00:16:44.857 "uuid": "47f3cf4f-9810-4c65-93d1-0a954b319519", 00:16:44.857 "strip_size_kb": 0, 00:16:44.857 "state": "configuring", 00:16:44.857 "raid_level": "raid1", 
00:16:44.857 "superblock": true, 00:16:44.857 "num_base_bdevs": 2, 00:16:44.857 "num_base_bdevs_discovered": 1, 00:16:44.857 "num_base_bdevs_operational": 2, 00:16:44.857 "base_bdevs_list": [ 00:16:44.857 { 00:16:44.857 "name": "BaseBdev1", 00:16:44.857 "uuid": "cd05552c-0ab6-4704-908c-caea319a55f5", 00:16:44.857 "is_configured": true, 00:16:44.857 "data_offset": 256, 00:16:44.857 "data_size": 7936 00:16:44.857 }, 00:16:44.857 { 00:16:44.857 "name": "BaseBdev2", 00:16:44.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.857 "is_configured": false, 00:16:44.857 "data_offset": 0, 00:16:44.857 "data_size": 0 00:16:44.857 } 00:16:44.857 ] 00:16:44.857 }' 00:16:44.857 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.857 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.426 [2024-11-18 03:16:48.771650] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:45.426 [2024-11-18 03:16:48.771761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.426 [2024-11-18 03:16:48.783723] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.426 [2024-11-18 03:16:48.785639] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:45.426 [2024-11-18 03:16:48.785828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.426 
03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.426 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.426 "name": "Existed_Raid", 00:16:45.426 "uuid": "5fd45c5d-e6d2-40eb-82aa-cbeda671f173", 00:16:45.426 "strip_size_kb": 0, 00:16:45.426 "state": "configuring", 00:16:45.426 "raid_level": "raid1", 00:16:45.426 "superblock": true, 00:16:45.426 "num_base_bdevs": 2, 00:16:45.426 "num_base_bdevs_discovered": 1, 00:16:45.426 "num_base_bdevs_operational": 2, 00:16:45.426 "base_bdevs_list": [ 00:16:45.426 { 00:16:45.426 "name": "BaseBdev1", 00:16:45.426 "uuid": "cd05552c-0ab6-4704-908c-caea319a55f5", 00:16:45.426 "is_configured": true, 00:16:45.426 "data_offset": 256, 00:16:45.426 "data_size": 7936 00:16:45.426 }, 00:16:45.426 { 00:16:45.426 "name": "BaseBdev2", 00:16:45.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.427 "is_configured": false, 00:16:45.427 "data_offset": 0, 00:16:45.427 "data_size": 0 00:16:45.427 } 00:16:45.427 ] 00:16:45.427 }' 00:16:45.427 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:16:45.427 03:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.687 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:16:45.687 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.687 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.687 [2024-11-18 03:16:49.250496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.687 [2024-11-18 03:16:49.250777] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:45.687 [2024-11-18 03:16:49.250845] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:45.687 [2024-11-18 03:16:49.251048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:45.687 [2024-11-18 03:16:49.251174] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:45.687 [2024-11-18 03:16:49.251227] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:16:45.687 [2024-11-18 03:16:49.251337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.687 BaseBdev2 00:16:45.687 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.687 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:45.687 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:45.687 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:16:45.687 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:45.687 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:45.687 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:45.687 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:45.687 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.687 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.946 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.946 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:45.946 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.946 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.946 [ 00:16:45.946 { 00:16:45.946 "name": "BaseBdev2", 00:16:45.946 "aliases": [ 00:16:45.946 "b051fc4c-625c-408c-aca9-336c4950f708" 00:16:45.946 ], 00:16:45.946 "product_name": "Malloc disk", 00:16:45.946 "block_size": 4128, 00:16:45.946 "num_blocks": 8192, 00:16:45.946 "uuid": "b051fc4c-625c-408c-aca9-336c4950f708", 00:16:45.946 "md_size": 32, 00:16:45.946 "md_interleave": true, 00:16:45.946 "dif_type": 0, 00:16:45.946 "assigned_rate_limits": { 00:16:45.946 "rw_ios_per_sec": 0, 00:16:45.946 "rw_mbytes_per_sec": 0, 00:16:45.946 "r_mbytes_per_sec": 0, 00:16:45.946 "w_mbytes_per_sec": 0 00:16:45.946 }, 00:16:45.946 "claimed": true, 00:16:45.946 "claim_type": "exclusive_write", 
00:16:45.946 "zoned": false, 00:16:45.946 "supported_io_types": { 00:16:45.946 "read": true, 00:16:45.946 "write": true, 00:16:45.946 "unmap": true, 00:16:45.946 "flush": true, 00:16:45.946 "reset": true, 00:16:45.946 "nvme_admin": false, 00:16:45.946 "nvme_io": false, 00:16:45.946 "nvme_io_md": false, 00:16:45.946 "write_zeroes": true, 00:16:45.946 "zcopy": true, 00:16:45.946 "get_zone_info": false, 00:16:45.946 "zone_management": false, 00:16:45.946 "zone_append": false, 00:16:45.946 "compare": false, 00:16:45.946 "compare_and_write": false, 00:16:45.946 "abort": true, 00:16:45.946 "seek_hole": false, 00:16:45.946 "seek_data": false, 00:16:45.946 "copy": true, 00:16:45.946 "nvme_iov_md": false 00:16:45.946 }, 00:16:45.946 "memory_domains": [ 00:16:45.946 { 00:16:45.946 "dma_device_id": "system", 00:16:45.946 "dma_device_type": 1 00:16:45.946 }, 00:16:45.946 { 00:16:45.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.946 "dma_device_type": 2 00:16:45.946 } 00:16:45.946 ], 00:16:45.946 "driver_specific": {} 00:16:45.946 } 00:16:45.946 ] 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.947 
03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.947 "name": "Existed_Raid", 00:16:45.947 "uuid": "5fd45c5d-e6d2-40eb-82aa-cbeda671f173", 00:16:45.947 "strip_size_kb": 0, 00:16:45.947 "state": "online", 00:16:45.947 "raid_level": "raid1", 00:16:45.947 "superblock": true, 00:16:45.947 "num_base_bdevs": 2, 00:16:45.947 "num_base_bdevs_discovered": 2, 00:16:45.947 
"num_base_bdevs_operational": 2, 00:16:45.947 "base_bdevs_list": [ 00:16:45.947 { 00:16:45.947 "name": "BaseBdev1", 00:16:45.947 "uuid": "cd05552c-0ab6-4704-908c-caea319a55f5", 00:16:45.947 "is_configured": true, 00:16:45.947 "data_offset": 256, 00:16:45.947 "data_size": 7936 00:16:45.947 }, 00:16:45.947 { 00:16:45.947 "name": "BaseBdev2", 00:16:45.947 "uuid": "b051fc4c-625c-408c-aca9-336c4950f708", 00:16:45.947 "is_configured": true, 00:16:45.947 "data_offset": 256, 00:16:45.947 "data_size": 7936 00:16:45.947 } 00:16:45.947 ] 00:16:45.947 }' 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.947 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.206 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:46.206 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:46.206 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:46.206 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:46.206 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:46.206 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:46.206 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:46.206 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:46.206 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.206 03:16:49 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.206 [2024-11-18 03:16:49.750047] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.206 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.206 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:46.206 "name": "Existed_Raid", 00:16:46.206 "aliases": [ 00:16:46.206 "5fd45c5d-e6d2-40eb-82aa-cbeda671f173" 00:16:46.206 ], 00:16:46.206 "product_name": "Raid Volume", 00:16:46.206 "block_size": 4128, 00:16:46.206 "num_blocks": 7936, 00:16:46.206 "uuid": "5fd45c5d-e6d2-40eb-82aa-cbeda671f173", 00:16:46.206 "md_size": 32, 00:16:46.206 "md_interleave": true, 00:16:46.206 "dif_type": 0, 00:16:46.206 "assigned_rate_limits": { 00:16:46.206 "rw_ios_per_sec": 0, 00:16:46.206 "rw_mbytes_per_sec": 0, 00:16:46.206 "r_mbytes_per_sec": 0, 00:16:46.206 "w_mbytes_per_sec": 0 00:16:46.206 }, 00:16:46.206 "claimed": false, 00:16:46.206 "zoned": false, 00:16:46.206 "supported_io_types": { 00:16:46.206 "read": true, 00:16:46.206 "write": true, 00:16:46.206 "unmap": false, 00:16:46.206 "flush": false, 00:16:46.206 "reset": true, 00:16:46.206 "nvme_admin": false, 00:16:46.206 "nvme_io": false, 00:16:46.206 "nvme_io_md": false, 00:16:46.206 "write_zeroes": true, 00:16:46.206 "zcopy": false, 00:16:46.206 "get_zone_info": false, 00:16:46.206 "zone_management": false, 00:16:46.206 "zone_append": false, 00:16:46.206 "compare": false, 00:16:46.206 "compare_and_write": false, 00:16:46.206 "abort": false, 00:16:46.206 "seek_hole": false, 00:16:46.206 "seek_data": false, 00:16:46.206 "copy": false, 00:16:46.206 "nvme_iov_md": false 00:16:46.206 }, 00:16:46.206 "memory_domains": [ 00:16:46.206 { 00:16:46.206 "dma_device_id": "system", 00:16:46.206 "dma_device_type": 1 00:16:46.206 }, 00:16:46.206 { 00:16:46.206 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:46.206 "dma_device_type": 2 00:16:46.206 }, 00:16:46.206 { 00:16:46.206 "dma_device_id": "system", 00:16:46.206 "dma_device_type": 1 00:16:46.206 }, 00:16:46.206 { 00:16:46.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.206 "dma_device_type": 2 00:16:46.206 } 00:16:46.206 ], 00:16:46.206 "driver_specific": { 00:16:46.206 "raid": { 00:16:46.206 "uuid": "5fd45c5d-e6d2-40eb-82aa-cbeda671f173", 00:16:46.206 "strip_size_kb": 0, 00:16:46.206 "state": "online", 00:16:46.206 "raid_level": "raid1", 00:16:46.206 "superblock": true, 00:16:46.206 "num_base_bdevs": 2, 00:16:46.206 "num_base_bdevs_discovered": 2, 00:16:46.206 "num_base_bdevs_operational": 2, 00:16:46.206 "base_bdevs_list": [ 00:16:46.206 { 00:16:46.206 "name": "BaseBdev1", 00:16:46.206 "uuid": "cd05552c-0ab6-4704-908c-caea319a55f5", 00:16:46.206 "is_configured": true, 00:16:46.206 "data_offset": 256, 00:16:46.206 "data_size": 7936 00:16:46.206 }, 00:16:46.206 { 00:16:46.206 "name": "BaseBdev2", 00:16:46.206 "uuid": "b051fc4c-625c-408c-aca9-336c4950f708", 00:16:46.206 "is_configured": true, 00:16:46.206 "data_offset": 256, 00:16:46.206 "data_size": 7936 00:16:46.206 } 00:16:46.206 ] 00:16:46.206 } 00:16:46.206 } 00:16:46.206 }' 00:16:46.206 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:46.466 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:46.466 BaseBdev2' 00:16:46.466 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.466 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:46.467 
03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.467 [2024-11-18 03:16:49.977411] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.467 03:16:49 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.467 03:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.467 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.467 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.727 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.727 "name": "Existed_Raid", 00:16:46.727 "uuid": "5fd45c5d-e6d2-40eb-82aa-cbeda671f173", 00:16:46.727 "strip_size_kb": 0, 00:16:46.727 "state": "online", 00:16:46.727 "raid_level": "raid1", 00:16:46.727 "superblock": true, 00:16:46.727 "num_base_bdevs": 2, 00:16:46.727 "num_base_bdevs_discovered": 1, 00:16:46.727 "num_base_bdevs_operational": 1, 00:16:46.727 "base_bdevs_list": [ 00:16:46.727 { 00:16:46.727 "name": null, 00:16:46.727 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:46.727 "is_configured": false, 00:16:46.727 "data_offset": 0, 00:16:46.727 "data_size": 7936 00:16:46.727 }, 00:16:46.727 { 00:16:46.727 "name": "BaseBdev2", 00:16:46.727 "uuid": "b051fc4c-625c-408c-aca9-336c4950f708", 00:16:46.727 "is_configured": true, 00:16:46.727 "data_offset": 256, 00:16:46.727 "data_size": 7936 00:16:46.727 } 00:16:46.727 ] 00:16:46.727 }' 00:16:46.727 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.727 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:46.988 03:16:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.988 [2024-11-18 03:16:50.460464] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:46.988 [2024-11-18 03:16:50.460628] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:46.988 [2024-11-18 03:16:50.472682] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.988 [2024-11-18 03:16:50.472826] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.988 [2024-11-18 03:16:50.472884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 98885 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98885 ']' 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98885 00:16:46.988 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:46.989 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:46.989 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98885 00:16:46.989 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:46.989 killing process with pid 98885 00:16:46.989 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:46.989 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98885' 00:16:46.989 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 98885 00:16:46.989 [2024-11-18 03:16:50.560629] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.989 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 98885 00:16:46.989 [2024-11-18 03:16:50.561643] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:47.248 
03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:16:47.248 ************************************ 00:16:47.248 END TEST raid_state_function_test_sb_md_interleaved 00:16:47.248 ************************************ 00:16:47.248 00:16:47.248 real 0m3.938s 00:16:47.248 user 0m6.197s 00:16:47.248 sys 0m0.801s 00:16:47.248 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:47.248 03:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.508 03:16:50 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:16:47.508 03:16:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:47.508 03:16:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:47.508 03:16:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.508 ************************************ 00:16:47.508 START TEST raid_superblock_test_md_interleaved 00:16:47.508 ************************************ 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=99122 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 99122 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99122 ']' 00:16:47.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.508 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:47.509 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.509 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:47.509 03:16:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.509 [2024-11-18 03:16:50.966261] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:47.509 [2024-11-18 03:16:50.966890] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99122 ] 00:16:47.768 [2024-11-18 03:16:51.127424] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.768 [2024-11-18 03:16:51.177759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.768 [2024-11-18 03:16:51.220151] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.768 [2024-11-18 03:16:51.220271] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.338 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:48.338 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:48.338 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:48.338 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.338 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:48.338 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:48.338 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:48.338 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:48.338 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:48.338 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:48.338 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:16:48.338 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.338 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.338 malloc1 00:16:48.338 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.338 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:48.338 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.338 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.338 [2024-11-18 03:16:51.822517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:48.338 [2024-11-18 03:16:51.822636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:48.338 [2024-11-18 03:16:51.822679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:48.338 [2024-11-18 03:16:51.822716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.338 [2024-11-18 03:16:51.824694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.338 [2024-11-18 03:16:51.824773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:48.339 pt1 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.339 03:16:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.339 malloc2 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.339 [2024-11-18 03:16:51.865109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:48.339 [2024-11-18 03:16:51.865237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.339 [2024-11-18 03:16:51.865280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:48.339 [2024-11-18 03:16:51.865334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.339 [2024-11-18 03:16:51.867634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.339 [2024-11-18 03:16:51.867726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:48.339 pt2 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.339 [2024-11-18 03:16:51.877129] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:48.339 [2024-11-18 03:16:51.879068] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.339 [2024-11-18 03:16:51.879270] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:48.339 [2024-11-18 03:16:51.879323] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:48.339 [2024-11-18 03:16:51.879433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:48.339 [2024-11-18 03:16:51.879548] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:48.339 [2024-11-18 03:16:51.879589] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:48.339 [2024-11-18 03:16:51.879700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.339 03:16:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.339 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.599 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.599 "name": "raid_bdev1", 00:16:48.599 "uuid": "55ae6845-5e07-4ade-ac1e-83bca2c0d743", 00:16:48.599 "strip_size_kb": 0, 00:16:48.599 "state": "online", 00:16:48.599 "raid_level": "raid1", 00:16:48.599 "superblock": true, 00:16:48.599 "num_base_bdevs": 2, 00:16:48.599 "num_base_bdevs_discovered": 2, 00:16:48.599 "num_base_bdevs_operational": 2, 00:16:48.599 "base_bdevs_list": [ 00:16:48.599 { 00:16:48.599 "name": "pt1", 00:16:48.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:48.599 "is_configured": true, 00:16:48.599 "data_offset": 256, 00:16:48.599 "data_size": 7936 00:16:48.599 }, 00:16:48.599 { 00:16:48.599 "name": "pt2", 00:16:48.599 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:16:48.599 "is_configured": true, 00:16:48.599 "data_offset": 256, 00:16:48.599 "data_size": 7936 00:16:48.599 } 00:16:48.599 ] 00:16:48.599 }' 00:16:48.599 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.599 03:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.859 [2024-11-18 03:16:52.272822] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:16:48.859 "name": "raid_bdev1", 00:16:48.859 "aliases": [ 00:16:48.859 "55ae6845-5e07-4ade-ac1e-83bca2c0d743" 00:16:48.859 ], 00:16:48.859 "product_name": "Raid Volume", 00:16:48.859 "block_size": 4128, 00:16:48.859 "num_blocks": 7936, 00:16:48.859 "uuid": "55ae6845-5e07-4ade-ac1e-83bca2c0d743", 00:16:48.859 "md_size": 32, 00:16:48.859 "md_interleave": true, 00:16:48.859 "dif_type": 0, 00:16:48.859 "assigned_rate_limits": { 00:16:48.859 "rw_ios_per_sec": 0, 00:16:48.859 "rw_mbytes_per_sec": 0, 00:16:48.859 "r_mbytes_per_sec": 0, 00:16:48.859 "w_mbytes_per_sec": 0 00:16:48.859 }, 00:16:48.859 "claimed": false, 00:16:48.859 "zoned": false, 00:16:48.859 "supported_io_types": { 00:16:48.859 "read": true, 00:16:48.859 "write": true, 00:16:48.859 "unmap": false, 00:16:48.859 "flush": false, 00:16:48.859 "reset": true, 00:16:48.859 "nvme_admin": false, 00:16:48.859 "nvme_io": false, 00:16:48.859 "nvme_io_md": false, 00:16:48.859 "write_zeroes": true, 00:16:48.859 "zcopy": false, 00:16:48.859 "get_zone_info": false, 00:16:48.859 "zone_management": false, 00:16:48.859 "zone_append": false, 00:16:48.859 "compare": false, 00:16:48.859 "compare_and_write": false, 00:16:48.859 "abort": false, 00:16:48.859 "seek_hole": false, 00:16:48.859 "seek_data": false, 00:16:48.859 "copy": false, 00:16:48.859 "nvme_iov_md": false 00:16:48.859 }, 00:16:48.859 "memory_domains": [ 00:16:48.859 { 00:16:48.859 "dma_device_id": "system", 00:16:48.859 "dma_device_type": 1 00:16:48.859 }, 00:16:48.859 { 00:16:48.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.859 "dma_device_type": 2 00:16:48.859 }, 00:16:48.859 { 00:16:48.859 "dma_device_id": "system", 00:16:48.859 "dma_device_type": 1 00:16:48.859 }, 00:16:48.859 { 00:16:48.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.859 "dma_device_type": 2 00:16:48.859 } 00:16:48.859 ], 00:16:48.859 "driver_specific": { 00:16:48.859 "raid": { 00:16:48.859 "uuid": "55ae6845-5e07-4ade-ac1e-83bca2c0d743", 00:16:48.859 "strip_size_kb": 0, 
00:16:48.859 "state": "online", 00:16:48.859 "raid_level": "raid1", 00:16:48.859 "superblock": true, 00:16:48.859 "num_base_bdevs": 2, 00:16:48.859 "num_base_bdevs_discovered": 2, 00:16:48.859 "num_base_bdevs_operational": 2, 00:16:48.859 "base_bdevs_list": [ 00:16:48.859 { 00:16:48.859 "name": "pt1", 00:16:48.859 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:48.859 "is_configured": true, 00:16:48.859 "data_offset": 256, 00:16:48.859 "data_size": 7936 00:16:48.859 }, 00:16:48.859 { 00:16:48.859 "name": "pt2", 00:16:48.859 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.859 "is_configured": true, 00:16:48.859 "data_offset": 256, 00:16:48.859 "data_size": 7936 00:16:48.859 } 00:16:48.859 ] 00:16:48.859 } 00:16:48.859 } 00:16:48.859 }' 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:48.859 pt2' 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@10 -- # set +x 00:16:48.859 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:16:49.120 [2024-11-18 03:16:52.512336] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=55ae6845-5e07-4ade-ac1e-83bca2c0d743 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 55ae6845-5e07-4ade-ac1e-83bca2c0d743 ']' 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.120 [2024-11-18 03:16:52.544045] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:49.120 [2024-11-18 03:16:52.544117] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.120 [2024-11-18 03:16:52.544225] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.120 [2024-11-18 03:16:52.544329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.120 [2024-11-18 03:16:52.544383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.120 03:16:52 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.120 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.120 [2024-11-18 03:16:52.691779] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:49.120 [2024-11-18 03:16:52.693731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:49.120 [2024-11-18 03:16:52.693858] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:49.120 [2024-11-18 03:16:52.693943] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:49.120 [2024-11-18 03:16:52.694008] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:49.120 [2024-11-18 03:16:52.694039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:49.380 request: 00:16:49.380 { 00:16:49.380 "name": "raid_bdev1", 00:16:49.380 "raid_level": "raid1", 00:16:49.380 "base_bdevs": [ 00:16:49.380 "malloc1", 00:16:49.380 "malloc2" 00:16:49.380 ], 00:16:49.380 "superblock": false, 00:16:49.380 "method": "bdev_raid_create", 00:16:49.380 "req_id": 1 00:16:49.380 } 00:16:49.380 Got JSON-RPC error response 00:16:49.380 response: 00:16:49.380 { 00:16:49.380 "code": -17, 00:16:49.380 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:49.380 } 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.380 [2024-11-18 03:16:52.759609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:49.380 [2024-11-18 03:16:52.759712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.380 [2024-11-18 03:16:52.759750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:49.380 [2024-11-18 03:16:52.759777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.380 [2024-11-18 03:16:52.761730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.380 [2024-11-18 03:16:52.761801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:49.380 [2024-11-18 03:16:52.761876] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt1 00:16:49.380 [2024-11-18 03:16:52.761940] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:49.380 pt1 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.380 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.381 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.381 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:49.381 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.381 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.381 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.381 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.381 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.381 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.381 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.381 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:16:49.381 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.381 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.381 "name": "raid_bdev1", 00:16:49.381 "uuid": "55ae6845-5e07-4ade-ac1e-83bca2c0d743", 00:16:49.381 "strip_size_kb": 0, 00:16:49.381 "state": "configuring", 00:16:49.381 "raid_level": "raid1", 00:16:49.381 "superblock": true, 00:16:49.381 "num_base_bdevs": 2, 00:16:49.381 "num_base_bdevs_discovered": 1, 00:16:49.381 "num_base_bdevs_operational": 2, 00:16:49.381 "base_bdevs_list": [ 00:16:49.381 { 00:16:49.381 "name": "pt1", 00:16:49.381 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.381 "is_configured": true, 00:16:49.381 "data_offset": 256, 00:16:49.381 "data_size": 7936 00:16:49.381 }, 00:16:49.381 { 00:16:49.381 "name": null, 00:16:49.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.381 "is_configured": false, 00:16:49.381 "data_offset": 256, 00:16:49.381 "data_size": 7936 00:16:49.381 } 00:16:49.381 ] 00:16:49.381 }' 00:16:49.381 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.381 03:16:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.951 [2024-11-18 03:16:53.230931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:49.951 [2024-11-18 03:16:53.231062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.951 [2024-11-18 03:16:53.231132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:49.951 [2024-11-18 03:16:53.231167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.951 [2024-11-18 03:16:53.231400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.951 [2024-11-18 03:16:53.231449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:49.951 [2024-11-18 03:16:53.231533] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:49.951 [2024-11-18 03:16:53.231583] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:49.951 [2024-11-18 03:16:53.231703] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:49.951 [2024-11-18 03:16:53.231749] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:49.951 [2024-11-18 03:16:53.231859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:49.951 [2024-11-18 03:16:53.231973] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:49.951 [2024-11-18 03:16:53.232014] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:49.951 [2024-11-18 03:16:53.232107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.951 pt2 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.951 "name": "raid_bdev1", 00:16:49.951 "uuid": "55ae6845-5e07-4ade-ac1e-83bca2c0d743", 00:16:49.951 "strip_size_kb": 0, 00:16:49.951 "state": "online", 00:16:49.951 "raid_level": "raid1", 00:16:49.951 "superblock": true, 00:16:49.951 "num_base_bdevs": 2, 00:16:49.951 "num_base_bdevs_discovered": 2, 00:16:49.951 "num_base_bdevs_operational": 2, 00:16:49.951 "base_bdevs_list": [ 00:16:49.951 { 00:16:49.951 "name": "pt1", 00:16:49.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.951 "is_configured": true, 00:16:49.951 "data_offset": 256, 00:16:49.951 "data_size": 7936 00:16:49.951 }, 00:16:49.951 { 00:16:49.951 "name": "pt2", 00:16:49.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.951 "is_configured": true, 00:16:49.951 "data_offset": 256, 00:16:49.951 "data_size": 7936 00:16:49.951 } 00:16:49.951 ] 00:16:49.951 }' 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.951 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.211 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:50.211 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:50.211 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:50.211 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:50.211 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:50.211 03:16:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:50.211 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.211 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.211 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:50.211 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.211 [2024-11-18 03:16:53.726352] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.211 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.211 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:50.211 "name": "raid_bdev1", 00:16:50.211 "aliases": [ 00:16:50.211 "55ae6845-5e07-4ade-ac1e-83bca2c0d743" 00:16:50.211 ], 00:16:50.211 "product_name": "Raid Volume", 00:16:50.211 "block_size": 4128, 00:16:50.211 "num_blocks": 7936, 00:16:50.211 "uuid": "55ae6845-5e07-4ade-ac1e-83bca2c0d743", 00:16:50.211 "md_size": 32, 00:16:50.211 "md_interleave": true, 00:16:50.211 "dif_type": 0, 00:16:50.211 "assigned_rate_limits": { 00:16:50.211 "rw_ios_per_sec": 0, 00:16:50.211 "rw_mbytes_per_sec": 0, 00:16:50.211 "r_mbytes_per_sec": 0, 00:16:50.211 "w_mbytes_per_sec": 0 00:16:50.211 }, 00:16:50.211 "claimed": false, 00:16:50.211 "zoned": false, 00:16:50.211 "supported_io_types": { 00:16:50.211 "read": true, 00:16:50.211 "write": true, 00:16:50.211 "unmap": false, 00:16:50.211 "flush": false, 00:16:50.211 "reset": true, 00:16:50.211 "nvme_admin": false, 00:16:50.211 "nvme_io": false, 00:16:50.211 "nvme_io_md": false, 00:16:50.211 "write_zeroes": true, 00:16:50.211 "zcopy": false, 00:16:50.211 "get_zone_info": false, 00:16:50.211 "zone_management": 
false, 00:16:50.211 "zone_append": false, 00:16:50.211 "compare": false, 00:16:50.211 "compare_and_write": false, 00:16:50.211 "abort": false, 00:16:50.211 "seek_hole": false, 00:16:50.211 "seek_data": false, 00:16:50.211 "copy": false, 00:16:50.211 "nvme_iov_md": false 00:16:50.211 }, 00:16:50.211 "memory_domains": [ 00:16:50.211 { 00:16:50.211 "dma_device_id": "system", 00:16:50.211 "dma_device_type": 1 00:16:50.211 }, 00:16:50.211 { 00:16:50.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.211 "dma_device_type": 2 00:16:50.211 }, 00:16:50.211 { 00:16:50.211 "dma_device_id": "system", 00:16:50.211 "dma_device_type": 1 00:16:50.211 }, 00:16:50.211 { 00:16:50.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.211 "dma_device_type": 2 00:16:50.211 } 00:16:50.211 ], 00:16:50.211 "driver_specific": { 00:16:50.211 "raid": { 00:16:50.211 "uuid": "55ae6845-5e07-4ade-ac1e-83bca2c0d743", 00:16:50.211 "strip_size_kb": 0, 00:16:50.211 "state": "online", 00:16:50.211 "raid_level": "raid1", 00:16:50.211 "superblock": true, 00:16:50.211 "num_base_bdevs": 2, 00:16:50.211 "num_base_bdevs_discovered": 2, 00:16:50.211 "num_base_bdevs_operational": 2, 00:16:50.211 "base_bdevs_list": [ 00:16:50.211 { 00:16:50.211 "name": "pt1", 00:16:50.211 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.211 "is_configured": true, 00:16:50.211 "data_offset": 256, 00:16:50.211 "data_size": 7936 00:16:50.211 }, 00:16:50.211 { 00:16:50.211 "name": "pt2", 00:16:50.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.211 "is_configured": true, 00:16:50.211 "data_offset": 256, 00:16:50.211 "data_size": 7936 00:16:50.211 } 00:16:50.211 ] 00:16:50.211 } 00:16:50.211 } 00:16:50.211 }' 00:16:50.211 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 
00:16:50.471 pt2' 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.471 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.472 [2024-11-18 03:16:53.961910] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.472 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.472 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 55ae6845-5e07-4ade-ac1e-83bca2c0d743 '!=' 55ae6845-5e07-4ade-ac1e-83bca2c0d743 ']' 00:16:50.472 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:50.472 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:50.472 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:50.472 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:50.472 03:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.472 03:16:53 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.472 [2024-11-18 03:16:54.005611] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:50.472 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.472 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:50.472 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.472 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.472 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.472 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.472 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:50.472 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.472 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.472 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.472 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.472 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.472 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.472 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.472 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.472 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.731 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.731 "name": "raid_bdev1", 00:16:50.731 "uuid": "55ae6845-5e07-4ade-ac1e-83bca2c0d743", 00:16:50.731 "strip_size_kb": 0, 00:16:50.731 "state": "online", 00:16:50.731 "raid_level": "raid1", 00:16:50.731 "superblock": true, 00:16:50.731 "num_base_bdevs": 2, 00:16:50.731 "num_base_bdevs_discovered": 1, 00:16:50.731 "num_base_bdevs_operational": 1, 00:16:50.731 "base_bdevs_list": [ 00:16:50.731 { 00:16:50.731 "name": null, 00:16:50.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.731 "is_configured": false, 00:16:50.731 "data_offset": 0, 00:16:50.731 "data_size": 7936 00:16:50.731 }, 00:16:50.731 { 00:16:50.731 "name": "pt2", 00:16:50.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.731 "is_configured": true, 00:16:50.731 "data_offset": 256, 00:16:50.731 "data_size": 7936 00:16:50.731 } 00:16:50.731 ] 00:16:50.731 }' 00:16:50.731 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.731 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.992 [2024-11-18 03:16:54.428860] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:50.992 [2024-11-18 03:16:54.428939] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online 
to offline 00:16:50.992 [2024-11-18 03:16:54.429072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.992 [2024-11-18 03:16:54.429143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.992 [2024-11-18 03:16:54.429217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.992 03:16:54 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.992 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.992 [2024-11-18 03:16:54.500728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:50.992 [2024-11-18 03:16:54.500832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.992 [2024-11-18 03:16:54.500870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:50.992 [2024-11-18 03:16:54.500898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.992 [2024-11-18 03:16:54.502894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.992 [2024-11-18 03:16:54.502973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:50.992 [2024-11-18 03:16:54.503057] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:50.993 [2024-11-18 03:16:54.503093] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:50.993 [2024-11-18 03:16:54.503155] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:50.993 [2024-11-18 03:16:54.503163] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:50.993 [2024-11-18 03:16:54.503258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:50.993 [2024-11-18 03:16:54.503319] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:50.993 [2024-11-18 03:16:54.503328] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:50.993 [2024-11-18 03:16:54.503387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.993 pt2 00:16:50.993 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.993 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:50.993 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.993 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.993 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.993 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.993 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:50.993 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.993 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:50.993 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.993 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.993 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.993 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.993 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.993 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.993 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.993 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.993 "name": "raid_bdev1", 00:16:50.993 "uuid": "55ae6845-5e07-4ade-ac1e-83bca2c0d743", 00:16:50.993 "strip_size_kb": 0, 00:16:50.993 "state": "online", 00:16:50.993 "raid_level": "raid1", 00:16:50.993 "superblock": true, 00:16:50.993 "num_base_bdevs": 2, 00:16:50.993 "num_base_bdevs_discovered": 1, 00:16:50.993 "num_base_bdevs_operational": 1, 00:16:50.993 "base_bdevs_list": [ 00:16:50.993 { 00:16:50.993 "name": null, 00:16:50.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.993 "is_configured": false, 00:16:50.993 "data_offset": 256, 00:16:50.993 "data_size": 7936 00:16:50.993 }, 00:16:50.993 { 00:16:50.993 "name": "pt2", 00:16:50.993 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.993 "is_configured": true, 00:16:50.993 "data_offset": 256, 00:16:50.993 "data_size": 7936 00:16:50.993 } 00:16:50.993 ] 00:16:50.993 }' 00:16:50.993 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.993 03:16:54 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.563 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:51.563 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.563 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.563 [2024-11-18 03:16:54.932053] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.563 [2024-11-18 03:16:54.932134] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.563 [2024-11-18 03:16:54.932236] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.563 [2024-11-18 03:16:54.932303] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.563 [2024-11-18 03:16:54.932352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:51.563 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.563 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:51.563 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.563 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.563 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.563 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.563 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:51.563 03:16:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:51.563 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:51.563 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:51.563 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.563 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.564 [2024-11-18 03:16:54.995908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:51.564 [2024-11-18 03:16:54.996028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.564 [2024-11-18 03:16:54.996074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:51.564 [2024-11-18 03:16:54.996111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.564 [2024-11-18 03:16:54.998095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.564 [2024-11-18 03:16:54.998166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:51.564 [2024-11-18 03:16:54.998243] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:51.564 [2024-11-18 03:16:54.998310] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:51.564 [2024-11-18 03:16:54.998428] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:51.564 [2024-11-18 03:16:54.998484] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.564 [2024-11-18 03:16:54.998524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007080 name raid_bdev1, state configuring 00:16:51.564 [2024-11-18 03:16:54.998608] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:51.564 [2024-11-18 03:16:54.998714] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:51.564 [2024-11-18 03:16:54.998757] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:51.564 [2024-11-18 03:16:54.998850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:51.564 [2024-11-18 03:16:54.998940] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:51.564 [2024-11-18 03:16:54.998990] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:51.564 [2024-11-18 03:16:54.999098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.564 pt1 00:16:51.564 03:16:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.564 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:51.564 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:51.564 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.564 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.564 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.564 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.564 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:51.564 03:16:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.564 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.564 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.564 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.564 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.564 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.564 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.564 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.564 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.564 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.564 "name": "raid_bdev1", 00:16:51.564 "uuid": "55ae6845-5e07-4ade-ac1e-83bca2c0d743", 00:16:51.564 "strip_size_kb": 0, 00:16:51.564 "state": "online", 00:16:51.564 "raid_level": "raid1", 00:16:51.564 "superblock": true, 00:16:51.564 "num_base_bdevs": 2, 00:16:51.564 "num_base_bdevs_discovered": 1, 00:16:51.564 "num_base_bdevs_operational": 1, 00:16:51.564 "base_bdevs_list": [ 00:16:51.564 { 00:16:51.564 "name": null, 00:16:51.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.564 "is_configured": false, 00:16:51.564 "data_offset": 256, 00:16:51.564 "data_size": 7936 00:16:51.564 }, 00:16:51.564 { 00:16:51.564 "name": "pt2", 00:16:51.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.564 "is_configured": true, 00:16:51.564 "data_offset": 256, 00:16:51.564 
"data_size": 7936 00:16:51.564 } 00:16:51.564 ] 00:16:51.564 }' 00:16:51.564 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.564 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.133 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:52.133 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:52.133 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.133 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.133 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.133 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:52.133 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:52.133 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:52.133 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.133 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.134 [2024-11-18 03:16:55.495338] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.134 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.134 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 55ae6845-5e07-4ade-ac1e-83bca2c0d743 '!=' 55ae6845-5e07-4ade-ac1e-83bca2c0d743 ']' 00:16:52.134 03:16:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 99122 00:16:52.134 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99122 ']' 00:16:52.134 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99122 00:16:52.134 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:52.134 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:52.134 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99122 00:16:52.134 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:52.134 killing process with pid 99122 00:16:52.134 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:52.134 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99122' 00:16:52.134 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 99122 00:16:52.134 [2024-11-18 03:16:55.578531] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:52.134 [2024-11-18 03:16:55.578625] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.134 [2024-11-18 03:16:55.578677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.134 [2024-11-18 03:16:55.578686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:52.134 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 99122 00:16:52.134 [2024-11-18 03:16:55.602703] 
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.394 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:16:52.394 ************************************ 00:16:52.394 END TEST raid_superblock_test_md_interleaved 00:16:52.394 ************************************ 00:16:52.394 00:16:52.394 real 0m4.969s 00:16:52.394 user 0m8.089s 00:16:52.394 sys 0m1.082s 00:16:52.394 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:52.394 03:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.394 03:16:55 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:16:52.394 03:16:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:52.394 03:16:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:52.394 03:16:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.394 ************************************ 00:16:52.394 START TEST raid_rebuild_test_sb_md_interleaved 00:16:52.394 ************************************ 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:16:52.394 03:16:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:52.394 
03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=99440 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99440 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99440 ']' 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.394 03:16:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.654 [2024-11-18 03:16:56.013749] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:52.654 [2024-11-18 03:16:56.013951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:52.654 Zero copy mechanism will not be used. 
00:16:52.654 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99440 ] 00:16:52.654 [2024-11-18 03:16:56.164388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.654 [2024-11-18 03:16:56.213920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.970 [2024-11-18 03:16:56.257399] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.970 [2024-11-18 03:16:56.257515] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.540 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.540 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:53.540 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:53.540 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:16:53.540 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.540 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.540 BaseBdev1_malloc 00:16:53.540 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.540 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:53.540 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.540 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.540 [2024-11-18 03:16:56.868083] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:53.540 [2024-11-18 03:16:56.868142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.540 [2024-11-18 03:16:56.868170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:53.540 [2024-11-18 03:16:56.868179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.540 [2024-11-18 03:16:56.870107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.540 [2024-11-18 03:16:56.870143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:53.540 BaseBdev1 00:16:53.540 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.540 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:53.540 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:16:53.540 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.540 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.540 BaseBdev2_malloc 00:16:53.540 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.540 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:53.540 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.541 [2024-11-18 03:16:56.905997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:16:53.541 [2024-11-18 03:16:56.906132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.541 [2024-11-18 03:16:56.906181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:53.541 [2024-11-18 03:16:56.906237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.541 [2024-11-18 03:16:56.908502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.541 [2024-11-18 03:16:56.908588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:53.541 BaseBdev2 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.541 spare_malloc 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.541 spare_delay 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.541 [2024-11-18 03:16:56.946824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:53.541 [2024-11-18 03:16:56.946951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.541 [2024-11-18 03:16:56.947007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:53.541 [2024-11-18 03:16:56.947046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.541 [2024-11-18 03:16:56.948933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.541 [2024-11-18 03:16:56.949019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:53.541 spare 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.541 [2024-11-18 03:16:56.958838] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.541 [2024-11-18 03:16:56.960737] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.541 [2024-11-18 03:16:56.960973] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:53.541 [2024-11-18 03:16:56.961022] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:53.541 [2024-11-18 03:16:56.961136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:53.541 [2024-11-18 03:16:56.961241] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:53.541 [2024-11-18 03:16:56.961281] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:53.541 [2024-11-18 03:16:56.961402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.541 03:16:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.541 03:16:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.541 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.541 "name": "raid_bdev1", 00:16:53.541 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:16:53.541 "strip_size_kb": 0, 00:16:53.541 "state": "online", 00:16:53.541 "raid_level": "raid1", 00:16:53.541 "superblock": true, 00:16:53.541 "num_base_bdevs": 2, 00:16:53.541 "num_base_bdevs_discovered": 2, 00:16:53.541 "num_base_bdevs_operational": 2, 00:16:53.541 "base_bdevs_list": [ 00:16:53.541 { 00:16:53.541 "name": "BaseBdev1", 00:16:53.541 "uuid": "a51976a4-1f4d-558e-b33f-645e396ea9f9", 00:16:53.541 "is_configured": true, 00:16:53.541 "data_offset": 256, 00:16:53.541 "data_size": 7936 00:16:53.541 }, 00:16:53.541 { 00:16:53.541 "name": "BaseBdev2", 00:16:53.541 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:16:53.541 "is_configured": true, 00:16:53.541 "data_offset": 256, 00:16:53.541 "data_size": 7936 00:16:53.541 } 00:16:53.541 ] 00:16:53.541 }' 00:16:53.541 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.541 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:54.111 03:16:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:54.111 [2024-11-18 03:16:57.382410] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:54.111 [2024-11-18 03:16:57.481926] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.111 "name": "raid_bdev1", 00:16:54.111 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:16:54.111 "strip_size_kb": 0, 00:16:54.111 "state": "online", 00:16:54.111 "raid_level": "raid1", 00:16:54.111 "superblock": true, 00:16:54.111 "num_base_bdevs": 2, 00:16:54.111 "num_base_bdevs_discovered": 1, 00:16:54.111 "num_base_bdevs_operational": 1, 00:16:54.111 "base_bdevs_list": [ 00:16:54.111 { 00:16:54.111 "name": null, 00:16:54.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.111 "is_configured": false, 00:16:54.111 "data_offset": 0, 00:16:54.111 "data_size": 7936 00:16:54.111 }, 00:16:54.111 { 00:16:54.111 "name": "BaseBdev2", 00:16:54.111 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:16:54.111 "is_configured": true, 00:16:54.111 "data_offset": 256, 00:16:54.111 "data_size": 7936 00:16:54.111 } 00:16:54.111 ] 00:16:54.111 }' 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.111 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.371 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:54.371 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.371 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.371 [2024-11-18 03:16:57.873271] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:54.371 [2024-11-18 03:16:57.876340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:54.371 
[2024-11-18 03:16:57.878344] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:54.371 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.371 03:16:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:55.751 03:16:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.751 03:16:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.751 03:16:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.751 03:16:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.751 03:16:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.751 03:16:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.751 03:16:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.751 03:16:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.751 03:16:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.751 03:16:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.751 03:16:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.751 "name": "raid_bdev1", 00:16:55.751 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:16:55.751 "strip_size_kb": 0, 00:16:55.751 "state": "online", 00:16:55.751 "raid_level": "raid1", 00:16:55.751 "superblock": true, 00:16:55.751 "num_base_bdevs": 2, 
00:16:55.751 "num_base_bdevs_discovered": 2, 00:16:55.751 "num_base_bdevs_operational": 2, 00:16:55.751 "process": { 00:16:55.751 "type": "rebuild", 00:16:55.751 "target": "spare", 00:16:55.751 "progress": { 00:16:55.751 "blocks": 2560, 00:16:55.751 "percent": 32 00:16:55.751 } 00:16:55.751 }, 00:16:55.751 "base_bdevs_list": [ 00:16:55.751 { 00:16:55.751 "name": "spare", 00:16:55.751 "uuid": "26921951-7eb2-5e2e-8fc3-e53c34435209", 00:16:55.751 "is_configured": true, 00:16:55.751 "data_offset": 256, 00:16:55.751 "data_size": 7936 00:16:55.751 }, 00:16:55.751 { 00:16:55.751 "name": "BaseBdev2", 00:16:55.751 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:16:55.751 "is_configured": true, 00:16:55.751 "data_offset": 256, 00:16:55.751 "data_size": 7936 00:16:55.751 } 00:16:55.751 ] 00:16:55.751 }' 00:16:55.751 03:16:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.751 03:16:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.751 03:16:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.751 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.751 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:55.751 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.751 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.751 [2024-11-18 03:16:59.021057] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.751 [2024-11-18 03:16:59.083944] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:55.751 [2024-11-18 03:16:59.084082] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.751 [2024-11-18 03:16:59.084120] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.751 [2024-11-18 03:16:59.084142] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:55.751 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.751 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:55.751 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.751 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.751 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.752 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.752 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:55.752 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.752 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.752 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.752 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.752 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.752 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.752 03:16:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.752 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.752 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.752 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.752 "name": "raid_bdev1", 00:16:55.752 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:16:55.752 "strip_size_kb": 0, 00:16:55.752 "state": "online", 00:16:55.752 "raid_level": "raid1", 00:16:55.752 "superblock": true, 00:16:55.752 "num_base_bdevs": 2, 00:16:55.752 "num_base_bdevs_discovered": 1, 00:16:55.752 "num_base_bdevs_operational": 1, 00:16:55.752 "base_bdevs_list": [ 00:16:55.752 { 00:16:55.752 "name": null, 00:16:55.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.752 "is_configured": false, 00:16:55.752 "data_offset": 0, 00:16:55.752 "data_size": 7936 00:16:55.752 }, 00:16:55.752 { 00:16:55.752 "name": "BaseBdev2", 00:16:55.752 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:16:55.752 "is_configured": true, 00:16:55.752 "data_offset": 256, 00:16:55.752 "data_size": 7936 00:16:55.752 } 00:16:55.752 ] 00:16:55.752 }' 00:16:55.752 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.752 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.018 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:56.018 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.018 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:56.018 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@171 -- # local target=none 00:16:56.018 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.018 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.018 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.018 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.018 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.018 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.018 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.018 "name": "raid_bdev1", 00:16:56.018 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:16:56.018 "strip_size_kb": 0, 00:16:56.018 "state": "online", 00:16:56.018 "raid_level": "raid1", 00:16:56.018 "superblock": true, 00:16:56.018 "num_base_bdevs": 2, 00:16:56.018 "num_base_bdevs_discovered": 1, 00:16:56.018 "num_base_bdevs_operational": 1, 00:16:56.018 "base_bdevs_list": [ 00:16:56.018 { 00:16:56.018 "name": null, 00:16:56.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.018 "is_configured": false, 00:16:56.018 "data_offset": 0, 00:16:56.018 "data_size": 7936 00:16:56.018 }, 00:16:56.019 { 00:16:56.019 "name": "BaseBdev2", 00:16:56.019 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:16:56.019 "is_configured": true, 00:16:56.019 "data_offset": 256, 00:16:56.019 "data_size": 7936 00:16:56.019 } 00:16:56.019 ] 00:16:56.019 }' 00:16:56.019 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.293 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # 
[[ none == \n\o\n\e ]] 00:16:56.293 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.293 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:56.293 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:56.293 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.293 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.293 [2024-11-18 03:16:59.687241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:56.293 [2024-11-18 03:16:59.690290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:56.293 [2024-11-18 03:16:59.692246] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:56.293 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.293 03:16:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:57.249 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.249 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.249 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.249 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.249 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.249 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.249 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.249 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.249 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.249 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.249 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.249 "name": "raid_bdev1", 00:16:57.249 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:16:57.249 "strip_size_kb": 0, 00:16:57.249 "state": "online", 00:16:57.249 "raid_level": "raid1", 00:16:57.249 "superblock": true, 00:16:57.249 "num_base_bdevs": 2, 00:16:57.249 "num_base_bdevs_discovered": 2, 00:16:57.249 "num_base_bdevs_operational": 2, 00:16:57.249 "process": { 00:16:57.249 "type": "rebuild", 00:16:57.249 "target": "spare", 00:16:57.249 "progress": { 00:16:57.249 "blocks": 2560, 00:16:57.249 "percent": 32 00:16:57.249 } 00:16:57.249 }, 00:16:57.249 "base_bdevs_list": [ 00:16:57.249 { 00:16:57.249 "name": "spare", 00:16:57.249 "uuid": "26921951-7eb2-5e2e-8fc3-e53c34435209", 00:16:57.249 "is_configured": true, 00:16:57.249 "data_offset": 256, 00:16:57.249 "data_size": 7936 00:16:57.249 }, 00:16:57.249 { 00:16:57.249 "name": "BaseBdev2", 00:16:57.249 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:16:57.249 "is_configured": true, 00:16:57.249 "data_offset": 256, 00:16:57.249 "data_size": 7936 00:16:57.249 } 00:16:57.249 ] 00:16:57.249 }' 00:16:57.249 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.249 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.249 03:17:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:57.509 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=614 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.509 03:17:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.509 "name": "raid_bdev1", 00:16:57.509 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:16:57.509 "strip_size_kb": 0, 00:16:57.509 "state": "online", 00:16:57.509 "raid_level": "raid1", 00:16:57.509 "superblock": true, 00:16:57.509 "num_base_bdevs": 2, 00:16:57.509 "num_base_bdevs_discovered": 2, 00:16:57.509 "num_base_bdevs_operational": 2, 00:16:57.509 "process": { 00:16:57.509 "type": "rebuild", 00:16:57.509 "target": "spare", 00:16:57.509 "progress": { 00:16:57.509 "blocks": 2816, 00:16:57.509 "percent": 35 00:16:57.509 } 00:16:57.509 }, 00:16:57.509 "base_bdevs_list": [ 00:16:57.509 { 00:16:57.509 "name": "spare", 00:16:57.509 "uuid": "26921951-7eb2-5e2e-8fc3-e53c34435209", 00:16:57.509 "is_configured": true, 00:16:57.509 "data_offset": 256, 00:16:57.509 "data_size": 7936 00:16:57.509 }, 00:16:57.509 { 00:16:57.509 "name": "BaseBdev2", 00:16:57.509 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:16:57.509 "is_configured": true, 00:16:57.509 "data_offset": 256, 00:16:57.509 "data_size": 7936 00:16:57.509 } 00:16:57.509 ] 00:16:57.509 }' 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.509 03:17:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:58.447 03:17:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.448 03:17:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.448 03:17:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.448 03:17:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.448 03:17:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.448 03:17:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.448 03:17:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.448 03:17:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.448 03:17:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.448 03:17:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.448 03:17:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.448 03:17:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.448 "name": "raid_bdev1", 00:16:58.448 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:16:58.448 "strip_size_kb": 0, 00:16:58.448 "state": "online", 00:16:58.448 "raid_level": "raid1", 00:16:58.448 "superblock": true, 
00:16:58.448 "num_base_bdevs": 2, 00:16:58.448 "num_base_bdevs_discovered": 2, 00:16:58.448 "num_base_bdevs_operational": 2, 00:16:58.448 "process": { 00:16:58.448 "type": "rebuild", 00:16:58.448 "target": "spare", 00:16:58.448 "progress": { 00:16:58.448 "blocks": 5632, 00:16:58.448 "percent": 70 00:16:58.448 } 00:16:58.448 }, 00:16:58.448 "base_bdevs_list": [ 00:16:58.448 { 00:16:58.448 "name": "spare", 00:16:58.448 "uuid": "26921951-7eb2-5e2e-8fc3-e53c34435209", 00:16:58.448 "is_configured": true, 00:16:58.448 "data_offset": 256, 00:16:58.448 "data_size": 7936 00:16:58.448 }, 00:16:58.448 { 00:16:58.448 "name": "BaseBdev2", 00:16:58.448 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:16:58.448 "is_configured": true, 00:16:58.448 "data_offset": 256, 00:16:58.448 "data_size": 7936 00:16:58.448 } 00:16:58.448 ] 00:16:58.448 }' 00:16:58.448 03:17:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.708 03:17:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.708 03:17:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.708 03:17:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.708 03:17:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.277 [2024-11-18 03:17:02.805105] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:59.277 [2024-11-18 03:17:02.805281] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:59.277 [2024-11-18 03:17:02.805416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.847 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.847 03:17:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.847 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.847 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.847 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.847 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.847 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.848 "name": "raid_bdev1", 00:16:59.848 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:16:59.848 "strip_size_kb": 0, 00:16:59.848 "state": "online", 00:16:59.848 "raid_level": "raid1", 00:16:59.848 "superblock": true, 00:16:59.848 "num_base_bdevs": 2, 00:16:59.848 "num_base_bdevs_discovered": 2, 00:16:59.848 "num_base_bdevs_operational": 2, 00:16:59.848 "base_bdevs_list": [ 00:16:59.848 { 00:16:59.848 "name": "spare", 00:16:59.848 "uuid": "26921951-7eb2-5e2e-8fc3-e53c34435209", 00:16:59.848 "is_configured": true, 00:16:59.848 "data_offset": 256, 00:16:59.848 "data_size": 7936 00:16:59.848 }, 00:16:59.848 { 00:16:59.848 
"name": "BaseBdev2", 00:16:59.848 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:16:59.848 "is_configured": true, 00:16:59.848 "data_offset": 256, 00:16:59.848 "data_size": 7936 00:16:59.848 } 00:16:59.848 ] 00:16:59.848 }' 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.848 
03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.848 "name": "raid_bdev1", 00:16:59.848 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:16:59.848 "strip_size_kb": 0, 00:16:59.848 "state": "online", 00:16:59.848 "raid_level": "raid1", 00:16:59.848 "superblock": true, 00:16:59.848 "num_base_bdevs": 2, 00:16:59.848 "num_base_bdevs_discovered": 2, 00:16:59.848 "num_base_bdevs_operational": 2, 00:16:59.848 "base_bdevs_list": [ 00:16:59.848 { 00:16:59.848 "name": "spare", 00:16:59.848 "uuid": "26921951-7eb2-5e2e-8fc3-e53c34435209", 00:16:59.848 "is_configured": true, 00:16:59.848 "data_offset": 256, 00:16:59.848 "data_size": 7936 00:16:59.848 }, 00:16:59.848 { 00:16:59.848 "name": "BaseBdev2", 00:16:59.848 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:16:59.848 "is_configured": true, 00:16:59.848 "data_offset": 256, 00:16:59.848 "data_size": 7936 00:16:59.848 } 00:16:59.848 ] 00:16:59.848 }' 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.848 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.108 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.108 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.108 "name": "raid_bdev1", 00:17:00.108 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:17:00.108 "strip_size_kb": 0, 00:17:00.108 "state": "online", 00:17:00.108 "raid_level": "raid1", 00:17:00.108 "superblock": true, 00:17:00.108 "num_base_bdevs": 2, 00:17:00.108 "num_base_bdevs_discovered": 2, 00:17:00.108 "num_base_bdevs_operational": 2, 00:17:00.108 "base_bdevs_list": [ 00:17:00.108 { 
00:17:00.108 "name": "spare", 00:17:00.108 "uuid": "26921951-7eb2-5e2e-8fc3-e53c34435209", 00:17:00.108 "is_configured": true, 00:17:00.108 "data_offset": 256, 00:17:00.108 "data_size": 7936 00:17:00.108 }, 00:17:00.108 { 00:17:00.108 "name": "BaseBdev2", 00:17:00.108 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:17:00.108 "is_configured": true, 00:17:00.108 "data_offset": 256, 00:17:00.108 "data_size": 7936 00:17:00.108 } 00:17:00.108 ] 00:17:00.108 }' 00:17:00.108 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.108 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.369 [2024-11-18 03:17:03.847493] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.369 [2024-11-18 03:17:03.847525] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.369 [2024-11-18 03:17:03.847639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.369 [2024-11-18 03:17:03.847711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.369 [2024-11-18 03:17:03.847724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.369 03:17:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.369 [2024-11-18 03:17:03.919350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:00.369 [2024-11-18 03:17:03.919468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.369 [2024-11-18 03:17:03.919525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000009f80 00:17:00.369 [2024-11-18 03:17:03.919564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.369 [2024-11-18 03:17:03.921691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.369 [2024-11-18 03:17:03.921770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:00.369 [2024-11-18 03:17:03.921881] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:00.369 [2024-11-18 03:17:03.921983] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:00.369 [2024-11-18 03:17:03.922131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:00.369 spare 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.369 03:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.629 [2024-11-18 03:17:04.022089] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:17:00.629 [2024-11-18 03:17:04.022198] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:00.629 [2024-11-18 03:17:04.022334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:00.629 [2024-11-18 03:17:04.022501] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:17:00.629 [2024-11-18 03:17:04.022546] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:17:00.629 [2024-11-18 03:17:04.022672] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.629 "name": "raid_bdev1", 00:17:00.629 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:17:00.629 "strip_size_kb": 0, 00:17:00.629 "state": "online", 00:17:00.629 "raid_level": "raid1", 00:17:00.629 "superblock": true, 00:17:00.629 "num_base_bdevs": 2, 00:17:00.629 "num_base_bdevs_discovered": 2, 00:17:00.629 "num_base_bdevs_operational": 2, 00:17:00.629 "base_bdevs_list": [ 00:17:00.629 { 00:17:00.629 "name": "spare", 00:17:00.629 "uuid": "26921951-7eb2-5e2e-8fc3-e53c34435209", 00:17:00.629 "is_configured": true, 00:17:00.629 "data_offset": 256, 00:17:00.629 "data_size": 7936 00:17:00.629 }, 00:17:00.629 { 00:17:00.629 "name": "BaseBdev2", 00:17:00.629 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:17:00.629 "is_configured": true, 00:17:00.629 "data_offset": 256, 00:17:00.629 "data_size": 7936 00:17:00.629 } 00:17:00.629 ] 00:17:00.629 }' 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.629 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.199 "name": "raid_bdev1", 00:17:01.199 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:17:01.199 "strip_size_kb": 0, 00:17:01.199 "state": "online", 00:17:01.199 "raid_level": "raid1", 00:17:01.199 "superblock": true, 00:17:01.199 "num_base_bdevs": 2, 00:17:01.199 "num_base_bdevs_discovered": 2, 00:17:01.199 "num_base_bdevs_operational": 2, 00:17:01.199 "base_bdevs_list": [ 00:17:01.199 { 00:17:01.199 "name": "spare", 00:17:01.199 "uuid": "26921951-7eb2-5e2e-8fc3-e53c34435209", 00:17:01.199 "is_configured": true, 00:17:01.199 "data_offset": 256, 00:17:01.199 "data_size": 7936 00:17:01.199 }, 00:17:01.199 { 00:17:01.199 "name": "BaseBdev2", 00:17:01.199 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:17:01.199 "is_configured": true, 00:17:01.199 "data_offset": 256, 00:17:01.199 "data_size": 7936 00:17:01.199 } 00:17:01.199 ] 00:17:01.199 }' 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.199 [2024-11-18 03:17:04.662298] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.199 "name": "raid_bdev1", 00:17:01.199 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:17:01.199 "strip_size_kb": 0, 00:17:01.199 "state": "online", 00:17:01.199 "raid_level": "raid1", 00:17:01.199 "superblock": true, 00:17:01.199 "num_base_bdevs": 2, 00:17:01.199 "num_base_bdevs_discovered": 1, 00:17:01.199 "num_base_bdevs_operational": 1, 00:17:01.199 "base_bdevs_list": [ 00:17:01.199 { 00:17:01.199 "name": null, 00:17:01.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.199 "is_configured": false, 00:17:01.199 "data_offset": 0, 00:17:01.199 "data_size": 7936 00:17:01.199 }, 00:17:01.199 { 00:17:01.199 "name": 
"BaseBdev2", 00:17:01.199 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:17:01.199 "is_configured": true, 00:17:01.199 "data_offset": 256, 00:17:01.199 "data_size": 7936 00:17:01.199 } 00:17:01.199 ] 00:17:01.199 }' 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.199 03:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.769 03:17:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:01.769 03:17:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.769 03:17:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.769 [2024-11-18 03:17:05.089586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:01.769 [2024-11-18 03:17:05.089811] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:01.769 [2024-11-18 03:17:05.089837] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:01.769 [2024-11-18 03:17:05.089877] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:01.769 [2024-11-18 03:17:05.092796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:01.769 [2024-11-18 03:17:05.094889] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:01.769 03:17:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.769 03:17:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:02.707 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.707 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.707 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.707 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.707 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.707 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.707 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.707 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.707 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.707 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.707 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:02.707 "name": "raid_bdev1", 00:17:02.707 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:17:02.707 "strip_size_kb": 0, 00:17:02.707 "state": "online", 00:17:02.707 "raid_level": "raid1", 00:17:02.707 "superblock": true, 00:17:02.707 "num_base_bdevs": 2, 00:17:02.707 "num_base_bdevs_discovered": 2, 00:17:02.707 "num_base_bdevs_operational": 2, 00:17:02.707 "process": { 00:17:02.707 "type": "rebuild", 00:17:02.707 "target": "spare", 00:17:02.707 "progress": { 00:17:02.707 "blocks": 2560, 00:17:02.708 "percent": 32 00:17:02.708 } 00:17:02.708 }, 00:17:02.708 "base_bdevs_list": [ 00:17:02.708 { 00:17:02.708 "name": "spare", 00:17:02.708 "uuid": "26921951-7eb2-5e2e-8fc3-e53c34435209", 00:17:02.708 "is_configured": true, 00:17:02.708 "data_offset": 256, 00:17:02.708 "data_size": 7936 00:17:02.708 }, 00:17:02.708 { 00:17:02.708 "name": "BaseBdev2", 00:17:02.708 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:17:02.708 "is_configured": true, 00:17:02.708 "data_offset": 256, 00:17:02.708 "data_size": 7936 00:17:02.708 } 00:17:02.708 ] 00:17:02.708 }' 00:17:02.708 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.708 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.708 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.708 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.708 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:02.708 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.708 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.708 [2024-11-18 03:17:06.249730] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:02.968 [2024-11-18 03:17:06.299675] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:02.968 [2024-11-18 03:17:06.299745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.968 [2024-11-18 03:17:06.299762] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:02.968 [2024-11-18 03:17:06.299769] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:02.968 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.968 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:02.968 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.968 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.968 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.968 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.968 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:02.968 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.968 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.968 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.968 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.968 03:17:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.968 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.968 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.968 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.968 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.968 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.968 "name": "raid_bdev1", 00:17:02.968 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:17:02.968 "strip_size_kb": 0, 00:17:02.968 "state": "online", 00:17:02.968 "raid_level": "raid1", 00:17:02.968 "superblock": true, 00:17:02.968 "num_base_bdevs": 2, 00:17:02.968 "num_base_bdevs_discovered": 1, 00:17:02.968 "num_base_bdevs_operational": 1, 00:17:02.968 "base_bdevs_list": [ 00:17:02.968 { 00:17:02.968 "name": null, 00:17:02.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.968 "is_configured": false, 00:17:02.968 "data_offset": 0, 00:17:02.968 "data_size": 7936 00:17:02.968 }, 00:17:02.968 { 00:17:02.968 "name": "BaseBdev2", 00:17:02.968 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:17:02.968 "is_configured": true, 00:17:02.968 "data_offset": 256, 00:17:02.968 "data_size": 7936 00:17:02.968 } 00:17:02.968 ] 00:17:02.968 }' 00:17:02.968 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.968 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.228 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:03.228 03:17:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.228 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.228 [2024-11-18 03:17:06.762762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:03.228 [2024-11-18 03:17:06.762831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.228 [2024-11-18 03:17:06.762881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:03.228 [2024-11-18 03:17:06.762897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.228 [2024-11-18 03:17:06.763133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.228 [2024-11-18 03:17:06.763155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:03.228 [2024-11-18 03:17:06.763224] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:03.228 [2024-11-18 03:17:06.763239] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:03.228 [2024-11-18 03:17:06.763251] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:03.228 [2024-11-18 03:17:06.763275] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:03.228 [2024-11-18 03:17:06.766169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:03.228 [2024-11-18 03:17:06.768253] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:03.228 spare 00:17:03.228 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.228 03:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:04.609 "name": "raid_bdev1", 00:17:04.609 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:17:04.609 "strip_size_kb": 0, 00:17:04.609 "state": "online", 00:17:04.609 "raid_level": "raid1", 00:17:04.609 "superblock": true, 00:17:04.609 "num_base_bdevs": 2, 00:17:04.609 "num_base_bdevs_discovered": 2, 00:17:04.609 "num_base_bdevs_operational": 2, 00:17:04.609 "process": { 00:17:04.609 "type": "rebuild", 00:17:04.609 "target": "spare", 00:17:04.609 "progress": { 00:17:04.609 "blocks": 2560, 00:17:04.609 "percent": 32 00:17:04.609 } 00:17:04.609 }, 00:17:04.609 "base_bdevs_list": [ 00:17:04.609 { 00:17:04.609 "name": "spare", 00:17:04.609 "uuid": "26921951-7eb2-5e2e-8fc3-e53c34435209", 00:17:04.609 "is_configured": true, 00:17:04.609 "data_offset": 256, 00:17:04.609 "data_size": 7936 00:17:04.609 }, 00:17:04.609 { 00:17:04.609 "name": "BaseBdev2", 00:17:04.609 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:17:04.609 "is_configured": true, 00:17:04.609 "data_offset": 256, 00:17:04.609 "data_size": 7936 00:17:04.609 } 00:17:04.609 ] 00:17:04.609 }' 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.609 [2024-11-18 
03:17:07.931089] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:04.609 [2024-11-18 03:17:07.972947] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:04.609 [2024-11-18 03:17:07.973032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.609 [2024-11-18 03:17:07.973048] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:04.609 [2024-11-18 03:17:07.973057] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.609 03:17:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.609 03:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.609 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.609 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.609 "name": "raid_bdev1", 00:17:04.609 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:17:04.609 "strip_size_kb": 0, 00:17:04.609 "state": "online", 00:17:04.609 "raid_level": "raid1", 00:17:04.609 "superblock": true, 00:17:04.609 "num_base_bdevs": 2, 00:17:04.609 "num_base_bdevs_discovered": 1, 00:17:04.609 "num_base_bdevs_operational": 1, 00:17:04.609 "base_bdevs_list": [ 00:17:04.609 { 00:17:04.609 "name": null, 00:17:04.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.609 "is_configured": false, 00:17:04.609 "data_offset": 0, 00:17:04.609 "data_size": 7936 00:17:04.609 }, 00:17:04.609 { 00:17:04.609 "name": "BaseBdev2", 00:17:04.609 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:17:04.609 "is_configured": true, 00:17:04.609 "data_offset": 256, 00:17:04.609 "data_size": 7936 00:17:04.609 } 00:17:04.609 ] 00:17:04.609 }' 00:17:04.609 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.609 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.869 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:04.869 03:17:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.869 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:04.869 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:04.869 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.869 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.869 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.869 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.869 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.129 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.129 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.129 "name": "raid_bdev1", 00:17:05.129 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:17:05.129 "strip_size_kb": 0, 00:17:05.129 "state": "online", 00:17:05.129 "raid_level": "raid1", 00:17:05.129 "superblock": true, 00:17:05.129 "num_base_bdevs": 2, 00:17:05.129 "num_base_bdevs_discovered": 1, 00:17:05.129 "num_base_bdevs_operational": 1, 00:17:05.129 "base_bdevs_list": [ 00:17:05.129 { 00:17:05.129 "name": null, 00:17:05.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.129 "is_configured": false, 00:17:05.129 "data_offset": 0, 00:17:05.129 "data_size": 7936 00:17:05.129 }, 00:17:05.129 { 00:17:05.129 "name": "BaseBdev2", 00:17:05.129 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:17:05.129 "is_configured": true, 00:17:05.129 "data_offset": 256, 
00:17:05.129 "data_size": 7936 00:17:05.129 } 00:17:05.129 ] 00:17:05.129 }' 00:17:05.129 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.129 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:05.129 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.129 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:05.129 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:05.129 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.129 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.129 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.129 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:05.129 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.129 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.129 [2024-11-18 03:17:08.595783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:05.129 [2024-11-18 03:17:08.595853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.129 [2024-11-18 03:17:08.595874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:05.129 [2024-11-18 03:17:08.595885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.129 [2024-11-18 03:17:08.596050] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.129 [2024-11-18 03:17:08.596072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:05.129 [2024-11-18 03:17:08.596123] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:05.129 [2024-11-18 03:17:08.596149] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:05.129 [2024-11-18 03:17:08.596157] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:05.129 [2024-11-18 03:17:08.596181] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:05.129 BaseBdev1 00:17:05.129 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.129 03:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:06.069 03:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:06.069 03:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.069 03:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.069 03:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.069 03:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.069 03:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:06.069 03:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.069 03:17:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.069 03:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.069 03:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.069 03:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.069 03:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.069 03:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.069 03:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.069 03:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.329 03:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.329 "name": "raid_bdev1", 00:17:06.329 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:17:06.329 "strip_size_kb": 0, 00:17:06.329 "state": "online", 00:17:06.329 "raid_level": "raid1", 00:17:06.329 "superblock": true, 00:17:06.329 "num_base_bdevs": 2, 00:17:06.329 "num_base_bdevs_discovered": 1, 00:17:06.329 "num_base_bdevs_operational": 1, 00:17:06.329 "base_bdevs_list": [ 00:17:06.329 { 00:17:06.329 "name": null, 00:17:06.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.329 "is_configured": false, 00:17:06.329 "data_offset": 0, 00:17:06.329 "data_size": 7936 00:17:06.329 }, 00:17:06.329 { 00:17:06.329 "name": "BaseBdev2", 00:17:06.329 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:17:06.329 "is_configured": true, 00:17:06.329 "data_offset": 256, 00:17:06.329 "data_size": 7936 00:17:06.329 } 00:17:06.329 ] 00:17:06.329 }' 00:17:06.329 03:17:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.329 03:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.588 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:06.588 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.588 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:06.588 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:06.588 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.588 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.588 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.588 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.588 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.588 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.588 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.588 "name": "raid_bdev1", 00:17:06.588 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:17:06.588 "strip_size_kb": 0, 00:17:06.588 "state": "online", 00:17:06.588 "raid_level": "raid1", 00:17:06.588 "superblock": true, 00:17:06.588 "num_base_bdevs": 2, 00:17:06.588 "num_base_bdevs_discovered": 1, 00:17:06.588 "num_base_bdevs_operational": 1, 00:17:06.588 "base_bdevs_list": [ 00:17:06.588 { 00:17:06.588 "name": 
null, 00:17:06.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.588 "is_configured": false, 00:17:06.588 "data_offset": 0, 00:17:06.588 "data_size": 7936 00:17:06.588 }, 00:17:06.588 { 00:17:06.588 "name": "BaseBdev2", 00:17:06.588 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:17:06.588 "is_configured": true, 00:17:06.588 "data_offset": 256, 00:17:06.588 "data_size": 7936 00:17:06.588 } 00:17:06.588 ] 00:17:06.588 }' 00:17:06.588 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.588 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:06.588 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.848 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:06.848 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:06.848 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:17:06.848 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:06.848 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:06.848 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.848 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:06.848 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.848 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:06.848 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.848 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.848 [2024-11-18 03:17:10.201124] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.848 [2024-11-18 03:17:10.201301] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:06.848 [2024-11-18 03:17:10.201314] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:06.848 request: 00:17:06.848 { 00:17:06.848 "base_bdev": "BaseBdev1", 00:17:06.848 "raid_bdev": "raid_bdev1", 00:17:06.848 "method": "bdev_raid_add_base_bdev", 00:17:06.848 "req_id": 1 00:17:06.848 } 00:17:06.848 Got JSON-RPC error response 00:17:06.848 response: 00:17:06.848 { 00:17:06.848 "code": -22, 00:17:06.848 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:06.848 } 00:17:06.848 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:06.848 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:17:06.848 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:06.848 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:06.849 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:06.849 03:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:07.786 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:17:07.786 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.786 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.786 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.786 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.786 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:07.786 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.786 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.786 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.786 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.786 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.786 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.786 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.786 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:07.786 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.786 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.786 "name": "raid_bdev1", 00:17:07.786 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:17:07.786 "strip_size_kb": 0, 
00:17:07.786 "state": "online", 00:17:07.786 "raid_level": "raid1", 00:17:07.786 "superblock": true, 00:17:07.786 "num_base_bdevs": 2, 00:17:07.786 "num_base_bdevs_discovered": 1, 00:17:07.786 "num_base_bdevs_operational": 1, 00:17:07.786 "base_bdevs_list": [ 00:17:07.786 { 00:17:07.786 "name": null, 00:17:07.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.786 "is_configured": false, 00:17:07.786 "data_offset": 0, 00:17:07.786 "data_size": 7936 00:17:07.786 }, 00:17:07.786 { 00:17:07.786 "name": "BaseBdev2", 00:17:07.786 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:17:07.786 "is_configured": true, 00:17:07.786 "data_offset": 256, 00:17:07.786 "data_size": 7936 00:17:07.786 } 00:17:07.786 ] 00:17:07.786 }' 00:17:07.786 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.786 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.046 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:08.046 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.047 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:08.047 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:08.047 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.047 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.047 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.047 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.047 03:17:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.047 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.307 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.307 "name": "raid_bdev1", 00:17:08.307 "uuid": "e79bae18-2feb-41b4-9168-0dd39cc2ab2b", 00:17:08.307 "strip_size_kb": 0, 00:17:08.307 "state": "online", 00:17:08.307 "raid_level": "raid1", 00:17:08.307 "superblock": true, 00:17:08.307 "num_base_bdevs": 2, 00:17:08.307 "num_base_bdevs_discovered": 1, 00:17:08.307 "num_base_bdevs_operational": 1, 00:17:08.307 "base_bdevs_list": [ 00:17:08.307 { 00:17:08.307 "name": null, 00:17:08.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.307 "is_configured": false, 00:17:08.307 "data_offset": 0, 00:17:08.307 "data_size": 7936 00:17:08.307 }, 00:17:08.307 { 00:17:08.307 "name": "BaseBdev2", 00:17:08.307 "uuid": "78a6f61b-1312-5d5d-94d9-c0f122ac82ea", 00:17:08.307 "is_configured": true, 00:17:08.307 "data_offset": 256, 00:17:08.307 "data_size": 7936 00:17:08.307 } 00:17:08.307 ] 00:17:08.307 }' 00:17:08.307 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.307 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.307 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.307 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:08.307 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99440 00:17:08.307 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99440 ']' 00:17:08.307 03:17:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99440 00:17:08.307 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:17:08.307 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:08.307 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99440 00:17:08.307 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:08.307 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:08.307 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99440' 00:17:08.307 killing process with pid 99440 00:17:08.307 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99440 00:17:08.307 Received shutdown signal, test time was about 60.000000 seconds 00:17:08.307 00:17:08.307 Latency(us) 00:17:08.307 [2024-11-18T03:17:11.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.307 [2024-11-18T03:17:11.884Z] =================================================================================================================== 00:17:08.307 [2024-11-18T03:17:11.884Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:08.307 [2024-11-18 03:17:11.749439] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:08.307 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99440 00:17:08.307 [2024-11-18 03:17:11.749579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.307 [2024-11-18 03:17:11.749645] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:17:08.307 [2024-11-18 03:17:11.749659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:17:08.307 [2024-11-18 03:17:11.783231] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:08.567 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:17:08.567 00:17:08.567 real 0m16.081s 00:17:08.567 user 0m21.474s 00:17:08.567 sys 0m1.619s 00:17:08.567 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:08.567 03:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.567 ************************************ 00:17:08.567 END TEST raid_rebuild_test_sb_md_interleaved 00:17:08.567 ************************************ 00:17:08.567 03:17:12 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:17:08.567 03:17:12 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:17:08.567 03:17:12 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99440 ']' 00:17:08.567 03:17:12 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99440 00:17:08.567 03:17:12 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:17:08.567 00:17:08.567 real 9m55.797s 00:17:08.567 user 14m10.620s 00:17:08.567 sys 1m46.189s 00:17:08.567 03:17:12 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:08.567 03:17:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:08.567 ************************************ 00:17:08.567 END TEST bdev_raid 00:17:08.567 ************************************ 00:17:08.827 03:17:12 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:08.827 03:17:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:08.827 03:17:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:08.827 03:17:12 -- common/autotest_common.sh@10 -- # set +x 00:17:08.827 
************************************ 00:17:08.827 START TEST spdkcli_raid 00:17:08.827 ************************************ 00:17:08.827 03:17:12 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:08.827 * Looking for test storage... 00:17:08.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:08.827 03:17:12 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:08.827 03:17:12 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:17:08.827 03:17:12 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:08.827 03:17:12 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:08.827 03:17:12 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:17:08.827 03:17:12 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:08.827 03:17:12 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:08.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.827 --rc genhtml_branch_coverage=1 00:17:08.827 --rc genhtml_function_coverage=1 00:17:08.827 --rc genhtml_legend=1 00:17:08.827 --rc geninfo_all_blocks=1 00:17:08.827 --rc geninfo_unexecuted_blocks=1 00:17:08.827 00:17:08.827 ' 00:17:08.827 03:17:12 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:08.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.827 --rc genhtml_branch_coverage=1 00:17:08.827 --rc genhtml_function_coverage=1 00:17:08.827 --rc genhtml_legend=1 00:17:08.827 --rc geninfo_all_blocks=1 00:17:08.827 --rc geninfo_unexecuted_blocks=1 00:17:08.827 00:17:08.827 ' 00:17:08.827 
03:17:12 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:08.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.827 --rc genhtml_branch_coverage=1 00:17:08.827 --rc genhtml_function_coverage=1 00:17:08.827 --rc genhtml_legend=1 00:17:08.827 --rc geninfo_all_blocks=1 00:17:08.827 --rc geninfo_unexecuted_blocks=1 00:17:08.827 00:17:08.827 ' 00:17:08.827 03:17:12 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:08.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.827 --rc genhtml_branch_coverage=1 00:17:08.828 --rc genhtml_function_coverage=1 00:17:08.828 --rc genhtml_legend=1 00:17:08.828 --rc geninfo_all_blocks=1 00:17:08.828 --rc geninfo_unexecuted_blocks=1 00:17:08.828 00:17:08.828 ' 00:17:08.828 03:17:12 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:08.828 03:17:12 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:08.828 03:17:12 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:08.828 03:17:12 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:17:08.828 03:17:12 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:17:08.828 03:17:12 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:17:08.828 03:17:12 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:17:08.828 03:17:12 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:08.828 03:17:12 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:08.828 03:17:12 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:08.828 03:17:12 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:08.828 03:17:12 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:08.828 03:17:12 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:08.828 03:17:12 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:17:08.828 03:17:12 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:17:08.828 03:17:12 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:08.828 03:17:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:08.828 03:17:12 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:17:08.828 03:17:12 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=100109 00:17:08.828 03:17:12 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:08.828 03:17:12 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 100109 00:17:08.828 03:17:12 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 100109 ']' 00:17:08.828 03:17:12 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.828 03:17:12 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:08.828 03:17:12 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.828 03:17:12 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:08.828 03:17:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:09.088 [2024-11-18 03:17:12.463278] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:09.088 [2024-11-18 03:17:12.463399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100109 ] 00:17:09.088 [2024-11-18 03:17:12.624504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:09.347 [2024-11-18 03:17:12.675719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.347 [2024-11-18 03:17:12.675751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.915 03:17:13 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:09.915 03:17:13 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:17:09.915 03:17:13 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:17:09.915 03:17:13 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:09.915 03:17:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:09.915 03:17:13 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:17:09.915 03:17:13 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:09.915 03:17:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:09.915 03:17:13 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:17:09.915 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:17:09.915 ' 00:17:11.812 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:17:11.812 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:17:11.812 03:17:14 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:17:11.812 03:17:14 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:11.812 03:17:14 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:17:11.812 03:17:15 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:17:11.812 03:17:15 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:11.812 03:17:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:11.812 03:17:15 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:17:11.812 ' 00:17:12.746 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:17:12.746 03:17:16 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:17:12.746 03:17:16 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:12.746 03:17:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:12.746 03:17:16 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:17:12.746 03:17:16 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:12.746 03:17:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:12.746 03:17:16 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:17:12.746 03:17:16 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:17:13.312 03:17:16 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:17:13.312 03:17:16 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:17:13.312 03:17:16 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:17:13.312 03:17:16 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:13.312 03:17:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:13.312 03:17:16 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:17:13.312 03:17:16 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:13.312 03:17:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:13.312 03:17:16 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:17:13.312 ' 00:17:14.268 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:17:14.526 03:17:17 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:17:14.526 03:17:17 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:14.526 03:17:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:14.526 03:17:17 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:17:14.526 03:17:17 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:14.526 03:17:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:14.526 03:17:18 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:17:14.526 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:17:14.526 ' 00:17:15.895 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:17:15.895 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:17:15.895 03:17:19 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:17:15.895 03:17:19 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:15.895 03:17:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:16.152 03:17:19 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 100109 00:17:16.152 03:17:19 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100109 ']' 00:17:16.152 03:17:19 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100109 00:17:16.152 03:17:19 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:17:16.152 03:17:19 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:16.152 03:17:19 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100109 00:17:16.152 03:17:19 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:16.152 03:17:19 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:16.152 killing process with pid 100109 00:17:16.152 03:17:19 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100109' 00:17:16.152 03:17:19 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 100109 00:17:16.152 03:17:19 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 100109 00:17:16.452 03:17:19 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:17:16.452 03:17:19 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 100109 ']' 00:17:16.452 03:17:19 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 100109 00:17:16.452 03:17:19 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100109 ']' 00:17:16.452 03:17:19 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100109 00:17:16.452 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (100109) - No such process 00:17:16.452 Process with pid 100109 is not found 00:17:16.452 03:17:19 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 100109 is not found' 00:17:16.452 03:17:19 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:17:16.452 03:17:19 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:17:16.452 03:17:19 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:17:16.452 03:17:19 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:17:16.452 00:17:16.452 real 0m7.798s 00:17:16.452 user 0m16.608s 
00:17:16.452 sys 0m1.076s 00:17:16.452 03:17:19 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:16.452 03:17:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:16.452 ************************************ 00:17:16.452 END TEST spdkcli_raid 00:17:16.452 ************************************ 00:17:16.452 03:17:20 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:16.452 03:17:20 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:16.452 03:17:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:16.453 03:17:20 -- common/autotest_common.sh@10 -- # set +x 00:17:16.453 ************************************ 00:17:16.453 START TEST blockdev_raid5f 00:17:16.453 ************************************ 00:17:16.453 03:17:20 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:16.710 * Looking for test storage... 00:17:16.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:16.710 03:17:20 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:16.710 03:17:20 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:17:16.710 03:17:20 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:16.710 03:17:20 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:17:16.710 03:17:20 blockdev_raid5f -- 
scripts/common.sh@337 -- # read -ra ver2 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:16.710 03:17:20 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:17:16.710 03:17:20 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:16.710 03:17:20 blockdev_raid5f -- common/autotest_common.sh@1694 -- # 
export 'LCOV_OPTS= 00:17:16.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.710 --rc genhtml_branch_coverage=1 00:17:16.710 --rc genhtml_function_coverage=1 00:17:16.710 --rc genhtml_legend=1 00:17:16.710 --rc geninfo_all_blocks=1 00:17:16.710 --rc geninfo_unexecuted_blocks=1 00:17:16.710 00:17:16.710 ' 00:17:16.710 03:17:20 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:16.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.710 --rc genhtml_branch_coverage=1 00:17:16.710 --rc genhtml_function_coverage=1 00:17:16.710 --rc genhtml_legend=1 00:17:16.710 --rc geninfo_all_blocks=1 00:17:16.710 --rc geninfo_unexecuted_blocks=1 00:17:16.710 00:17:16.710 ' 00:17:16.710 03:17:20 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:16.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.710 --rc genhtml_branch_coverage=1 00:17:16.710 --rc genhtml_function_coverage=1 00:17:16.710 --rc genhtml_legend=1 00:17:16.710 --rc geninfo_all_blocks=1 00:17:16.710 --rc geninfo_unexecuted_blocks=1 00:17:16.710 00:17:16.710 ' 00:17:16.710 03:17:20 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:16.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.710 --rc genhtml_branch_coverage=1 00:17:16.710 --rc genhtml_function_coverage=1 00:17:16.710 --rc genhtml_legend=1 00:17:16.710 --rc geninfo_all_blocks=1 00:17:16.710 --rc geninfo_unexecuted_blocks=1 00:17:16.710 00:17:16.710 ' 00:17:16.710 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:16.710 03:17:20 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:17:16.710 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100366 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:16.711 03:17:20 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 100366 00:17:16.711 03:17:20 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 100366 ']' 00:17:16.711 03:17:20 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.711 03:17:20 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:16.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.711 03:17:20 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.711 03:17:20 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:16.711 03:17:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:16.968 [2024-11-18 03:17:20.350488] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:16.968 [2024-11-18 03:17:20.350615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100366 ] 00:17:16.968 [2024-11-18 03:17:20.498515] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.225 [2024-11-18 03:17:20.547998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:17:17.791 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:17:17.791 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:17:17.791 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:17.791 Malloc0 00:17:17.791 Malloc1 00:17:17.791 Malloc2 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.791 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.791 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:17:17.791 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:17.791 
03:17:21 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.791 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.791 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.791 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:17:17.791 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:17:17.791 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:17.791 03:17:21 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.791 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:17:17.791 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:17:17.791 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "d885b39c-e2b8-4761-b3dc-951408a600c7"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d885b39c-e2b8-4761-b3dc-951408a600c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "d885b39c-e2b8-4761-b3dc-951408a600c7",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "ac30dc6b-4980-4853-bd2d-f07b45b13c12",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "d308001b-761b-4c23-82fa-e95d5a12efe3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "353d8a6c-bd31-40d7-9e77-33e18c83d034",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:18.049 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:17:18.049 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:17:18.049 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:17:18.049 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 100366 00:17:18.049 03:17:21 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 100366 ']' 00:17:18.049 03:17:21 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 100366 00:17:18.049 03:17:21 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:17:18.049 03:17:21 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:17:18.049 03:17:21 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100366 00:17:18.049 03:17:21 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:18.049 03:17:21 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:18.049 killing process with pid 100366 00:17:18.049 03:17:21 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100366' 00:17:18.049 03:17:21 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 100366 00:17:18.049 03:17:21 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 100366 00:17:18.307 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:18.307 03:17:21 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:18.307 03:17:21 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:18.307 03:17:21 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:18.307 03:17:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:18.307 ************************************ 00:17:18.307 START TEST bdev_hello_world 00:17:18.307 ************************************ 00:17:18.307 03:17:21 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:18.565 [2024-11-18 03:17:21.938726] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:18.565 [2024-11-18 03:17:21.938864] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100404 ] 00:17:18.565 [2024-11-18 03:17:22.100981] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.824 [2024-11-18 03:17:22.151059] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.824 [2024-11-18 03:17:22.336763] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:18.824 [2024-11-18 03:17:22.336822] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:17:18.824 [2024-11-18 03:17:22.336839] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:18.824 [2024-11-18 03:17:22.337198] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:18.824 [2024-11-18 03:17:22.337335] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:18.824 [2024-11-18 03:17:22.337376] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:18.824 [2024-11-18 03:17:22.337468] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:17:18.824 00:17:18.824 [2024-11-18 03:17:22.337497] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:19.082 00:17:19.082 real 0m0.735s 00:17:19.082 user 0m0.403s 00:17:19.082 sys 0m0.217s 00:17:19.082 03:17:22 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:19.082 03:17:22 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:19.082 ************************************ 00:17:19.082 END TEST bdev_hello_world 00:17:19.082 ************************************ 00:17:19.082 03:17:22 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:17:19.082 03:17:22 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:19.082 03:17:22 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:19.082 03:17:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:19.341 ************************************ 00:17:19.341 START TEST bdev_bounds 00:17:19.341 ************************************ 00:17:19.341 03:17:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:17:19.341 03:17:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100435 00:17:19.341 03:17:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:19.341 03:17:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:19.341 03:17:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100435' 00:17:19.341 Process bdevio pid: 100435 00:17:19.341 03:17:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100435 00:17:19.341 03:17:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 100435 ']' 00:17:19.341 03:17:22 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.341 03:17:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:19.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.341 03:17:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.341 03:17:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:19.341 03:17:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:19.341 [2024-11-18 03:17:22.738733] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:19.341 [2024-11-18 03:17:22.738876] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100435 ] 00:17:19.341 [2024-11-18 03:17:22.900621] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:19.599 [2024-11-18 03:17:22.952705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.599 [2024-11-18 03:17:22.952797] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.599 [2024-11-18 03:17:22.952899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.167 03:17:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:20.167 03:17:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:17:20.167 03:17:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:20.167 I/O targets: 00:17:20.167 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:17:20.167 
00:17:20.167 00:17:20.167 CUnit - A unit testing framework for C - Version 2.1-3 00:17:20.167 http://cunit.sourceforge.net/ 00:17:20.167 00:17:20.167 00:17:20.167 Suite: bdevio tests on: raid5f 00:17:20.167 Test: blockdev write read block ...passed 00:17:20.167 Test: blockdev write zeroes read block ...passed 00:17:20.167 Test: blockdev write zeroes read no split ...passed 00:17:20.426 Test: blockdev write zeroes read split ...passed 00:17:20.426 Test: blockdev write zeroes read split partial ...passed 00:17:20.426 Test: blockdev reset ...passed 00:17:20.426 Test: blockdev write read 8 blocks ...passed 00:17:20.426 Test: blockdev write read size > 128k ...passed 00:17:20.426 Test: blockdev write read invalid size ...passed 00:17:20.426 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:20.426 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:20.426 Test: blockdev write read max offset ...passed 00:17:20.426 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:20.426 Test: blockdev writev readv 8 blocks ...passed 00:17:20.426 Test: blockdev writev readv 30 x 1block ...passed 00:17:20.426 Test: blockdev writev readv block ...passed 00:17:20.426 Test: blockdev writev readv size > 128k ...passed 00:17:20.426 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:20.426 Test: blockdev comparev and writev ...passed 00:17:20.426 Test: blockdev nvme passthru rw ...passed 00:17:20.426 Test: blockdev nvme passthru vendor specific ...passed 00:17:20.426 Test: blockdev nvme admin passthru ...passed 00:17:20.426 Test: blockdev copy ...passed 00:17:20.426 00:17:20.426 Run Summary: Type Total Ran Passed Failed Inactive 00:17:20.426 suites 1 1 n/a 0 0 00:17:20.426 tests 23 23 23 0 0 00:17:20.426 asserts 130 130 130 0 n/a 00:17:20.426 00:17:20.426 Elapsed time = 0.363 seconds 00:17:20.426 0 00:17:20.426 03:17:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 100435 
00:17:20.426 03:17:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 100435 ']' 00:17:20.426 03:17:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 100435 00:17:20.426 03:17:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:17:20.426 03:17:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:20.426 03:17:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100435 00:17:20.426 killing process with pid 100435 00:17:20.426 03:17:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:20.426 03:17:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:20.426 03:17:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100435' 00:17:20.426 03:17:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 100435 00:17:20.426 03:17:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 100435 00:17:20.684 03:17:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:20.684 00:17:20.684 real 0m1.506s 00:17:20.684 user 0m3.591s 00:17:20.684 sys 0m0.347s 00:17:20.684 03:17:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:20.684 03:17:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:20.684 ************************************ 00:17:20.684 END TEST bdev_bounds 00:17:20.684 ************************************ 00:17:20.684 03:17:24 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:20.684 03:17:24 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:20.684 03:17:24 blockdev_raid5f -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:17:20.684 03:17:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:20.684 ************************************ 00:17:20.684 START TEST bdev_nbd 00:17:20.684 ************************************ 00:17:20.684 03:17:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:20.684 03:17:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:20.684 03:17:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:20.684 03:17:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:20.684 03:17:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:20.684 03:17:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:17:20.684 03:17:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:20.684 03:17:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:17:20.684 03:17:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:20.685 03:17:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:20.685 03:17:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:20.685 03:17:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:17:20.685 03:17:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:17:20.685 03:17:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:20.685 03:17:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:17:20.685 03:17:24 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:20.685 03:17:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100484 00:17:20.685 03:17:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:20.685 03:17:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:20.685 03:17:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100484 /var/tmp/spdk-nbd.sock 00:17:20.685 03:17:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 100484 ']' 00:17:20.685 03:17:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:20.685 03:17:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:20.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:20.685 03:17:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:20.685 03:17:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:20.685 03:17:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:20.943 [2024-11-18 03:17:24.327052] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:20.943 [2024-11-18 03:17:24.327211] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.944 [2024-11-18 03:17:24.471831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.202 [2024-11-18 03:17:24.522251] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.771 03:17:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:21.771 03:17:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:17:21.771 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:17:21.771 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:21.771 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:17:21.771 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:21.771 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:17:21.771 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:21.771 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:17:21.771 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:21.771 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:21.771 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:21.771 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:21.771 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:21.771 03:17:25 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:22.030 1+0 records in 00:17:22.030 1+0 records out 00:17:22.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470436 s, 8.7 MB/s 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:22.030 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:22.290 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:22.290 { 00:17:22.290 "nbd_device": "/dev/nbd0", 00:17:22.290 "bdev_name": "raid5f" 00:17:22.290 } 00:17:22.290 ]' 00:17:22.290 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:22.290 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:22.290 { 00:17:22.290 "nbd_device": "/dev/nbd0", 00:17:22.290 "bdev_name": "raid5f" 00:17:22.290 } 00:17:22.290 ]' 00:17:22.290 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:22.290 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:22.290 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.290 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:22.290 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:22.290 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:22.290 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:22.290 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:22.549 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:17:22.549 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:22.549 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:22.549 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:22.549 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:22.549 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:22.549 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:22.549 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:22.549 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:22.549 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.549 03:17:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:22.808 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:22.809 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:17:22.809 /dev/nbd0 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:23.069 03:17:26 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.069 1+0 records in 00:17:23.069 1+0 records out 00:17:23.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294957 s, 13.9 MB/s 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:23.069 { 00:17:23.069 "nbd_device": "/dev/nbd0", 00:17:23.069 "bdev_name": "raid5f" 00:17:23.069 } 00:17:23.069 ]' 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:23.069 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:23.069 { 00:17:23.069 "nbd_device": "/dev/nbd0", 00:17:23.069 "bdev_name": "raid5f" 00:17:23.069 } 00:17:23.069 ]' 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:23.329 256+0 records in 00:17:23.329 256+0 records out 00:17:23.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128718 s, 81.5 MB/s 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:23.329 256+0 records in 00:17:23.329 256+0 records out 00:17:23.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307733 s, 34.1 MB/s 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.329 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:23.589 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:23.589 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:23.589 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:23.589 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.589 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.589 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:23.589 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:23.589 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.589 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:23.589 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:23.589 03:17:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:17:23.848 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:23.848 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:23.848 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:23.848 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:23.848 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:23.848 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:23.848 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:23.848 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:23.848 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:23.848 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:23.848 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:23.848 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:23.848 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:23.848 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:23.848 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:23.848 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:23.848 malloc_lvol_verify 00:17:24.108 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:24.108 ff85ea30-cc9f-48ad-9373-e4635d77eddc 00:17:24.108 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:24.367 d2ab5b60-a268-478d-bc9e-8b303bacced5 00:17:24.367 03:17:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:24.626 /dev/nbd0 00:17:24.626 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:24.626 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:24.626 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:24.626 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:24.626 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:24.626 mke2fs 1.47.0 (5-Feb-2023) 00:17:24.626 Discarding device blocks: 0/4096 done 00:17:24.626 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:24.626 00:17:24.626 Allocating group tables: 0/1 done 00:17:24.626 Writing inode tables: 0/1 done 00:17:24.626 Creating journal (1024 blocks): done 00:17:24.626 Writing superblocks and filesystem accounting information: 0/1 done 00:17:24.626 00:17:24.627 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:24.627 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:24.627 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:24.627 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:24.627 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:24.627 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.627 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100484 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 100484 ']' 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 100484 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100484 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100484' 00:17:24.886 killing process with pid 100484 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 100484 00:17:24.886 03:17:28 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 100484 00:17:25.146 03:17:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:25.146 00:17:25.146 real 0m4.405s 00:17:25.146 user 0m6.436s 00:17:25.146 sys 0m1.262s 00:17:25.146 03:17:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:25.146 03:17:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:25.146 ************************************ 00:17:25.146 END TEST bdev_nbd 00:17:25.146 ************************************ 00:17:25.146 03:17:28 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:25.146 03:17:28 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:17:25.146 03:17:28 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:17:25.146 03:17:28 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:17:25.146 03:17:28 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:25.146 03:17:28 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:25.146 03:17:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:25.146 ************************************ 00:17:25.146 START TEST bdev_fio 00:17:25.146 ************************************ 00:17:25.146 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:17:25.146 03:17:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:25.146 03:17:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:25.146 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:25.146 03:17:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:25.146 03:17:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:25.146 03:17:28 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:25.146 03:17:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:25.146 03:17:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:25.146 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:25.146 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:17:25.146 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:17:25.146 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:25.146 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:25.146 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:25.406 ************************************ 00:17:25.406 START TEST bdev_fio_rw_verify 00:17:25.406 ************************************ 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:25.406 03:17:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:25.666 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:25.666 fio-3.35 00:17:25.666 Starting 1 thread 00:17:37.881 00:17:37.881 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100672: Mon Nov 18 03:17:39 2024 00:17:37.881 read: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(450MiB/10001msec) 00:17:37.881 slat (nsec): min=18702, max=82200, avg=20523.40, stdev=2231.09 00:17:37.881 clat (usec): min=10, max=372, avg=138.19, stdev=49.54 00:17:37.881 lat (usec): min=29, max=395, avg=158.72, stdev=49.98 00:17:37.881 clat percentiles (usec): 00:17:37.881 | 50.000th=[ 143], 99.000th=[ 245], 99.900th=[ 281], 99.990th=[ 318], 00:17:37.881 | 99.999th=[ 343] 00:17:37.881 write: IOPS=12.1k, BW=47.2MiB/s (49.5MB/s)(466MiB/9866msec); 0 zone resets 00:17:37.881 slat (usec): min=8, max=221, avg=17.84, stdev= 3.63 00:17:37.881 clat (usec): min=58, max=1712, avg=316.75, stdev=47.41 00:17:37.881 lat (usec): min=75, max=1933, avg=334.59, stdev=48.67 00:17:37.881 clat percentiles (usec): 00:17:37.881 | 50.000th=[ 322], 99.000th=[ 429], 99.900th=[ 586], 99.990th=[ 1004], 00:17:37.881 | 99.999th=[ 1631] 00:17:37.881 bw ( KiB/s): min=43214, max=50736, per=98.83%, avg=47798.74, stdev=1937.94, samples=19 00:17:37.881 iops : min=10803, max=12684, avg=11949.84, stdev=484.61, samples=19 00:17:37.881 lat (usec) : 20=0.01%, 50=0.01%, 
100=11.99%, 250=40.42%, 500=47.50% 00:17:37.881 lat (usec) : 750=0.06%, 1000=0.02% 00:17:37.881 lat (msec) : 2=0.01% 00:17:37.881 cpu : usr=99.02%, sys=0.38%, ctx=21, majf=0, minf=12611 00:17:37.881 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:37.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.881 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.881 issued rwts: total=115211,119286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.881 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:37.881 00:17:37.881 Run status group 0 (all jobs): 00:17:37.881 READ: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=450MiB (472MB), run=10001-10001msec 00:17:37.881 WRITE: bw=47.2MiB/s (49.5MB/s), 47.2MiB/s-47.2MiB/s (49.5MB/s-49.5MB/s), io=466MiB (489MB), run=9866-9866msec 00:17:37.881 ----------------------------------------------------- 00:17:37.881 Suppressions used: 00:17:37.881 count bytes template 00:17:37.881 1 7 /usr/src/fio/parse.c 00:17:37.881 511 49056 /usr/src/fio/iolog.c 00:17:37.881 1 8 libtcmalloc_minimal.so 00:17:37.881 1 904 libcrypto.so 00:17:37.881 ----------------------------------------------------- 00:17:37.881 00:17:37.881 00:17:37.881 real 0m11.199s 00:17:37.881 user 0m11.046s 00:17:37.881 sys 0m0.732s 00:17:37.881 ************************************ 00:17:37.881 END TEST bdev_fio_rw_verify 00:17:37.881 ************************************ 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:37.881 03:17:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "d885b39c-e2b8-4761-b3dc-951408a600c7"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"d885b39c-e2b8-4761-b3dc-951408a600c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "d885b39c-e2b8-4761-b3dc-951408a600c7",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "ac30dc6b-4980-4853-bd2d-f07b45b13c12",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "d308001b-761b-4c23-82fa-e95d5a12efe3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "353d8a6c-bd31-40d7-9e77-33e18c83d034",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:37.882 03:17:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:37.882 03:17:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:37.882 /home/vagrant/spdk_repo/spdk 00:17:37.882 03:17:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:37.882 03:17:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:37.882 03:17:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:17:37.882 00:17:37.882 real 0m11.466s 00:17:37.882 user 0m11.152s 00:17:37.882 sys 0m0.861s 00:17:37.882 ************************************ 00:17:37.882 END TEST bdev_fio 00:17:37.882 ************************************ 00:17:37.882 03:17:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:37.882 03:17:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:37.882 03:17:40 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:37.882 03:17:40 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:37.882 03:17:40 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:37.882 03:17:40 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:37.882 03:17:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:37.882 ************************************ 00:17:37.882 START TEST bdev_verify 00:17:37.882 ************************************ 00:17:37.882 03:17:40 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:37.882 [2024-11-18 03:17:40.320841] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:37.882 [2024-11-18 03:17:40.320994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100819 ] 00:17:37.882 [2024-11-18 03:17:40.478163] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:37.882 [2024-11-18 03:17:40.530052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.882 [2024-11-18 03:17:40.530146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.882 Running I/O for 5 seconds... 00:17:39.391 14234.00 IOPS, 55.60 MiB/s [2024-11-18T03:17:43.903Z] 14684.00 IOPS, 57.36 MiB/s [2024-11-18T03:17:44.841Z] 14836.67 IOPS, 57.96 MiB/s [2024-11-18T03:17:45.781Z] 15336.75 IOPS, 59.91 MiB/s [2024-11-18T03:17:45.781Z] 15343.00 IOPS, 59.93 MiB/s 00:17:42.204 Latency(us) 00:17:42.204 [2024-11-18T03:17:45.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.204 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:42.204 Verification LBA range: start 0x0 length 0x2000 00:17:42.204 raid5f : 5.02 7656.40 29.91 0.00 0.00 25117.31 339.84 23467.04 00:17:42.204 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:42.204 Verification LBA range: start 0x2000 length 0x2000 00:17:42.204 raid5f : 5.02 7678.87 30.00 0.00 0.00 24993.93 347.00 23352.57 00:17:42.204 [2024-11-18T03:17:45.781Z] =================================================================================================================== 00:17:42.204 [2024-11-18T03:17:45.781Z] Total : 15335.27 59.90 0.00 0.00 25055.53 339.84 23467.04 00:17:42.464 00:17:42.464 real 0m5.755s 00:17:42.464 user 0m10.680s 00:17:42.464 sys 0m0.252s 00:17:42.464 ************************************ 00:17:42.464 END TEST bdev_verify 00:17:42.464 ************************************ 
00:17:42.464 03:17:45 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:42.464 03:17:45 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:42.723 03:17:46 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:42.723 03:17:46 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:42.723 03:17:46 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:42.723 03:17:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:42.723 ************************************ 00:17:42.723 START TEST bdev_verify_big_io 00:17:42.723 ************************************ 00:17:42.723 03:17:46 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:42.723 [2024-11-18 03:17:46.140985] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:42.723 [2024-11-18 03:17:46.141187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100901 ] 00:17:42.982 [2024-11-18 03:17:46.301596] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:42.982 [2024-11-18 03:17:46.354596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.982 [2024-11-18 03:17:46.354712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.982 Running I/O for 5 seconds... 
00:17:45.298 758.00 IOPS, 47.38 MiB/s [2024-11-18T03:17:49.817Z] 792.00 IOPS, 49.50 MiB/s [2024-11-18T03:17:50.755Z] 845.33 IOPS, 52.83 MiB/s [2024-11-18T03:17:51.694Z] 871.75 IOPS, 54.48 MiB/s [2024-11-18T03:17:51.694Z] 901.20 IOPS, 56.33 MiB/s 00:17:48.118 Latency(us) 00:17:48.118 [2024-11-18T03:17:51.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.118 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:48.118 Verification LBA range: start 0x0 length 0x200 00:17:48.118 raid5f : 5.13 445.90 27.87 0.00 0.00 7124166.89 186.02 342504.30 00:17:48.118 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:48.118 Verification LBA range: start 0x200 length 0x200 00:17:48.118 raid5f : 5.09 448.75 28.05 0.00 0.00 7014443.06 183.34 342504.30 00:17:48.118 [2024-11-18T03:17:51.695Z] =================================================================================================================== 00:17:48.118 [2024-11-18T03:17:51.695Z] Total : 894.65 55.92 0.00 0.00 7069304.98 183.34 342504.30 00:17:48.377 00:17:48.377 real 0m5.864s 00:17:48.377 user 0m10.908s 00:17:48.377 sys 0m0.237s 00:17:48.377 03:17:51 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:48.377 ************************************ 00:17:48.377 END TEST bdev_verify_big_io 00:17:48.377 ************************************ 00:17:48.377 03:17:51 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:48.636 03:17:51 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:48.636 03:17:51 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:48.636 03:17:51 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:48.636 03:17:51 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:48.636 ************************************ 00:17:48.636 START TEST bdev_write_zeroes 00:17:48.636 ************************************ 00:17:48.636 03:17:51 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:48.636 [2024-11-18 03:17:52.072743] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:48.636 [2024-11-18 03:17:52.072882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100987 ] 00:17:48.896 [2024-11-18 03:17:52.233375] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.896 [2024-11-18 03:17:52.284176] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.155 Running I/O for 1 seconds... 
00:17:50.094 26799.00 IOPS, 104.68 MiB/s 00:17:50.094 Latency(us) 00:17:50.094 [2024-11-18T03:17:53.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.094 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:50.094 raid5f : 1.01 26763.53 104.55 0.00 0.00 4766.80 1724.26 7555.24 00:17:50.094 [2024-11-18T03:17:53.671Z] =================================================================================================================== 00:17:50.094 [2024-11-18T03:17:53.671Z] Total : 26763.53 104.55 0.00 0.00 4766.80 1724.26 7555.24 00:17:50.353 00:17:50.353 real 0m1.739s 00:17:50.353 user 0m1.387s 00:17:50.353 sys 0m0.231s 00:17:50.353 03:17:53 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:50.353 03:17:53 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:50.353 ************************************ 00:17:50.353 END TEST bdev_write_zeroes 00:17:50.353 ************************************ 00:17:50.353 03:17:53 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:50.353 03:17:53 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:50.353 03:17:53 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:50.353 03:17:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:50.353 ************************************ 00:17:50.353 START TEST bdev_json_nonenclosed 00:17:50.353 ************************************ 00:17:50.353 03:17:53 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:50.353 [2024-11-18 
03:17:53.881836] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:50.353 [2024-11-18 03:17:53.881957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101019 ] 00:17:50.613 [2024-11-18 03:17:54.043669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.613 [2024-11-18 03:17:54.094516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.613 [2024-11-18 03:17:54.094616] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:50.613 [2024-11-18 03:17:54.094644] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:50.613 [2024-11-18 03:17:54.094662] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:50.873 00:17:50.873 real 0m0.416s 00:17:50.873 user 0m0.186s 00:17:50.873 sys 0m0.126s 00:17:50.873 03:17:54 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:50.873 03:17:54 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:50.873 ************************************ 00:17:50.873 END TEST bdev_json_nonenclosed 00:17:50.873 ************************************ 00:17:50.873 03:17:54 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:50.873 03:17:54 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:50.873 03:17:54 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:50.873 03:17:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:50.873 
************************************ 00:17:50.873 START TEST bdev_json_nonarray 00:17:50.873 ************************************ 00:17:50.873 03:17:54 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:50.873 [2024-11-18 03:17:54.367296] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:50.873 [2024-11-18 03:17:54.367515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101049 ] 00:17:51.132 [2024-11-18 03:17:54.528314] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.132 [2024-11-18 03:17:54.578799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.132 [2024-11-18 03:17:54.579020] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:51.132 [2024-11-18 03:17:54.579098] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:51.132 [2024-11-18 03:17:54.579142] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:51.132 00:17:51.132 real 0m0.416s 00:17:51.132 user 0m0.182s 00:17:51.132 sys 0m0.129s 00:17:51.132 ************************************ 00:17:51.132 END TEST bdev_json_nonarray 00:17:51.132 ************************************ 00:17:51.132 03:17:54 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.132 03:17:54 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:51.392 03:17:54 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:17:51.392 03:17:54 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:17:51.392 03:17:54 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:17:51.392 03:17:54 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:51.392 03:17:54 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:17:51.392 03:17:54 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:51.392 03:17:54 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:51.392 03:17:54 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:17:51.392 03:17:54 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:17:51.392 03:17:54 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:17:51.392 03:17:54 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:17:51.392 00:17:51.392 real 0m34.750s 00:17:51.392 user 0m46.880s 00:17:51.392 sys 0m4.671s 00:17:51.392 03:17:54 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.392 ************************************ 00:17:51.392 END TEST blockdev_raid5f 00:17:51.392 
************************************ 00:17:51.392 03:17:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:51.392 03:17:54 -- spdk/autotest.sh@194 -- # uname -s 00:17:51.392 03:17:54 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:51.392 03:17:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:51.392 03:17:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:51.392 03:17:54 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:17:51.392 03:17:54 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:17:51.392 03:17:54 -- spdk/autotest.sh@256 -- # timing_exit lib 00:17:51.392 03:17:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:51.392 03:17:54 -- common/autotest_common.sh@10 -- # set +x 00:17:51.392 03:17:54 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:17:51.392 03:17:54 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:17:51.392 03:17:54 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:17:51.392 03:17:54 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:17:51.393 03:17:54 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:51.393 03:17:54 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:51.393 03:17:54 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:17:51.393 03:17:54 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:17:51.393 03:17:54 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:17:51.393 03:17:54 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:51.393 03:17:54 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:17:51.393 03:17:54 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:17:51.393 03:17:54 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:17:51.393 03:17:54 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:17:51.393 03:17:54 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:17:51.393 03:17:54 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:17:51.393 03:17:54 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:17:51.393 03:17:54 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:17:51.393 03:17:54 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 
00:17:51.393 03:17:54 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:17:51.393 03:17:54 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:51.393 03:17:54 -- common/autotest_common.sh@10 -- # set +x 00:17:51.393 03:17:54 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:17:51.393 03:17:54 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:17:51.393 03:17:54 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:17:51.393 03:17:54 -- common/autotest_common.sh@10 -- # set +x 00:17:53.930 INFO: APP EXITING 00:17:53.930 INFO: killing all VMs 00:17:53.930 INFO: killing vhost app 00:17:53.930 INFO: EXIT DONE 00:17:53.930 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:53.930 Waiting for block devices as requested 00:17:53.930 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:54.190 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:55.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:55.128 Cleaning 00:17:55.128 Removing: /var/run/dpdk/spdk0/config 00:17:55.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:17:55.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:17:55.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:17:55.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:17:55.128 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:17:55.128 Removing: /var/run/dpdk/spdk0/hugepage_info 00:17:55.128 Removing: /dev/shm/spdk_tgt_trace.pid69245 00:17:55.128 Removing: /var/run/dpdk/spdk0 00:17:55.128 Removing: /var/run/dpdk/spdk_pid100109 00:17:55.128 Removing: /var/run/dpdk/spdk_pid100366 00:17:55.128 Removing: /var/run/dpdk/spdk_pid100404 00:17:55.128 Removing: /var/run/dpdk/spdk_pid100435 00:17:55.128 Removing: /var/run/dpdk/spdk_pid100657 00:17:55.128 Removing: /var/run/dpdk/spdk_pid100819 00:17:55.128 Removing: /var/run/dpdk/spdk_pid100901 
00:17:55.128 Removing: /var/run/dpdk/spdk_pid100987 00:17:55.128 Removing: /var/run/dpdk/spdk_pid101019 00:17:55.128 Removing: /var/run/dpdk/spdk_pid101049 00:17:55.128 Removing: /var/run/dpdk/spdk_pid69080 00:17:55.128 Removing: /var/run/dpdk/spdk_pid69245 00:17:55.128 Removing: /var/run/dpdk/spdk_pid69451 00:17:55.128 Removing: /var/run/dpdk/spdk_pid69538 00:17:55.128 Removing: /var/run/dpdk/spdk_pid69567 00:17:55.128 Removing: /var/run/dpdk/spdk_pid69679 00:17:55.128 Removing: /var/run/dpdk/spdk_pid69695 00:17:55.128 Removing: /var/run/dpdk/spdk_pid69879 00:17:55.128 Removing: /var/run/dpdk/spdk_pid69958 00:17:55.128 Removing: /var/run/dpdk/spdk_pid70043 00:17:55.128 Removing: /var/run/dpdk/spdk_pid70143 00:17:55.128 Removing: /var/run/dpdk/spdk_pid70218 00:17:55.128 Removing: /var/run/dpdk/spdk_pid70263 00:17:55.128 Removing: /var/run/dpdk/spdk_pid70294 00:17:55.128 Removing: /var/run/dpdk/spdk_pid70370 00:17:55.128 Removing: /var/run/dpdk/spdk_pid70475 00:17:55.128 Removing: /var/run/dpdk/spdk_pid70903 00:17:55.128 Removing: /var/run/dpdk/spdk_pid70951 00:17:55.128 Removing: /var/run/dpdk/spdk_pid71003 00:17:55.128 Removing: /var/run/dpdk/spdk_pid71019 00:17:55.128 Removing: /var/run/dpdk/spdk_pid71082 00:17:55.128 Removing: /var/run/dpdk/spdk_pid71094 00:17:55.128 Removing: /var/run/dpdk/spdk_pid71163 00:17:55.128 Removing: /var/run/dpdk/spdk_pid71179 00:17:55.128 Removing: /var/run/dpdk/spdk_pid71232 00:17:55.128 Removing: /var/run/dpdk/spdk_pid71249 00:17:55.128 Removing: /var/run/dpdk/spdk_pid71292 00:17:55.128 Removing: /var/run/dpdk/spdk_pid71310 00:17:55.128 Removing: /var/run/dpdk/spdk_pid71439 00:17:55.128 Removing: /var/run/dpdk/spdk_pid71481 00:17:55.128 Removing: /var/run/dpdk/spdk_pid71559 00:17:55.128 Removing: /var/run/dpdk/spdk_pid72731 00:17:55.128 Removing: /var/run/dpdk/spdk_pid72927 00:17:55.128 Removing: /var/run/dpdk/spdk_pid73062 00:17:55.128 Removing: /var/run/dpdk/spdk_pid73661 00:17:55.128 Removing: /var/run/dpdk/spdk_pid73866 
00:17:55.128 Removing: /var/run/dpdk/spdk_pid73996 00:17:55.128 Removing: /var/run/dpdk/spdk_pid74601 00:17:55.128 Removing: /var/run/dpdk/spdk_pid74920 00:17:55.129 Removing: /var/run/dpdk/spdk_pid75049 00:17:55.129 Removing: /var/run/dpdk/spdk_pid76390 00:17:55.129 Removing: /var/run/dpdk/spdk_pid76632 00:17:55.129 Removing: /var/run/dpdk/spdk_pid76761 00:17:55.389 Removing: /var/run/dpdk/spdk_pid78102 00:17:55.389 Removing: /var/run/dpdk/spdk_pid78344 00:17:55.389 Removing: /var/run/dpdk/spdk_pid78473 00:17:55.389 Removing: /var/run/dpdk/spdk_pid79814 00:17:55.389 Removing: /var/run/dpdk/spdk_pid80243 00:17:55.389 Removing: /var/run/dpdk/spdk_pid80378 00:17:55.389 Removing: /var/run/dpdk/spdk_pid81808 00:17:55.389 Removing: /var/run/dpdk/spdk_pid82056 00:17:55.389 Removing: /var/run/dpdk/spdk_pid82185 00:17:55.389 Removing: /var/run/dpdk/spdk_pid83624 00:17:55.389 Removing: /var/run/dpdk/spdk_pid83873 00:17:55.389 Removing: /var/run/dpdk/spdk_pid84003 00:17:55.389 Removing: /var/run/dpdk/spdk_pid85433 00:17:55.389 Removing: /var/run/dpdk/spdk_pid85916 00:17:55.389 Removing: /var/run/dpdk/spdk_pid86045 00:17:55.389 Removing: /var/run/dpdk/spdk_pid86172 00:17:55.389 Removing: /var/run/dpdk/spdk_pid86579 00:17:55.389 Removing: /var/run/dpdk/spdk_pid87289 00:17:55.389 Removing: /var/run/dpdk/spdk_pid87650 00:17:55.389 Removing: /var/run/dpdk/spdk_pid88322 00:17:55.389 Removing: /var/run/dpdk/spdk_pid88748 00:17:55.389 Removing: /var/run/dpdk/spdk_pid89486 00:17:55.389 Removing: /var/run/dpdk/spdk_pid89878 00:17:55.389 Removing: /var/run/dpdk/spdk_pid91790 00:17:55.389 Removing: /var/run/dpdk/spdk_pid92220 00:17:55.389 Removing: /var/run/dpdk/spdk_pid92643 00:17:55.389 Removing: /var/run/dpdk/spdk_pid94668 00:17:55.389 Removing: /var/run/dpdk/spdk_pid95137 00:17:55.389 Removing: /var/run/dpdk/spdk_pid95631 00:17:55.389 Removing: /var/run/dpdk/spdk_pid96659 00:17:55.389 Removing: /var/run/dpdk/spdk_pid96974 00:17:55.389 Removing: /var/run/dpdk/spdk_pid97894 
00:17:55.389 Removing: /var/run/dpdk/spdk_pid98205 00:17:55.389 Removing: /var/run/dpdk/spdk_pid99122 00:17:55.389 Removing: /var/run/dpdk/spdk_pid99440 00:17:55.389 Clean 00:17:55.389 03:17:58 -- common/autotest_common.sh@1451 -- # return 0 00:17:55.389 03:17:58 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:17:55.389 03:17:58 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:55.389 03:17:58 -- common/autotest_common.sh@10 -- # set +x 00:17:55.648 03:17:58 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:17:55.648 03:17:58 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:55.648 03:17:58 -- common/autotest_common.sh@10 -- # set +x 00:17:55.648 03:17:59 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:55.648 03:17:59 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:17:55.648 03:17:59 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:17:55.648 03:17:59 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:17:55.648 03:17:59 -- spdk/autotest.sh@394 -- # hostname 00:17:55.648 03:17:59 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:17:55.648 geninfo: WARNING: invalid characters removed from testname! 
00:18:17.604 03:18:20 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:20.144 03:18:23 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:22.682 03:18:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:24.589 03:18:27 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:26.611 03:18:29 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:28.539 03:18:31 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:31.077 03:18:34 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:18:31.078 03:18:34 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:18:31.078 03:18:34 -- common/autotest_common.sh@1681 -- $ lcov --version
00:18:31.078 03:18:34 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:18:31.078 03:18:34 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:18:31.078 03:18:34 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:18:31.078 03:18:34 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:18:31.078 03:18:34 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:18:31.078 03:18:34 -- scripts/common.sh@336 -- $ IFS=.-:
00:18:31.078 03:18:34 -- scripts/common.sh@336 -- $ read -ra ver1
00:18:31.078 03:18:34 -- scripts/common.sh@337 -- $ IFS=.-:
00:18:31.078 03:18:34 -- scripts/common.sh@337 -- $ read -ra ver2
00:18:31.078 03:18:34 -- scripts/common.sh@338 -- $ local 'op=<'
00:18:31.078 03:18:34 -- scripts/common.sh@340 -- $ ver1_l=2
00:18:31.078 03:18:34 -- scripts/common.sh@341 -- $ ver2_l=1
00:18:31.078 03:18:34 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:18:31.078 03:18:34 -- scripts/common.sh@344 -- $ case "$op" in
00:18:31.078 03:18:34 -- scripts/common.sh@345 -- $ : 1
00:18:31.078 03:18:34 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:18:31.078 03:18:34 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:31.078 03:18:34 -- scripts/common.sh@365 -- $ decimal 1
00:18:31.078 03:18:34 -- scripts/common.sh@353 -- $ local d=1
00:18:31.078 03:18:34 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:18:31.078 03:18:34 -- scripts/common.sh@355 -- $ echo 1
00:18:31.078 03:18:34 -- scripts/common.sh@365 -- $ ver1[v]=1
00:18:31.078 03:18:34 -- scripts/common.sh@366 -- $ decimal 2
00:18:31.078 03:18:34 -- scripts/common.sh@353 -- $ local d=2
00:18:31.078 03:18:34 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:18:31.078 03:18:34 -- scripts/common.sh@355 -- $ echo 2
00:18:31.078 03:18:34 -- scripts/common.sh@366 -- $ ver2[v]=2
00:18:31.078 03:18:34 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:18:31.078 03:18:34 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:18:31.078 03:18:34 -- scripts/common.sh@368 -- $ return 0
00:18:31.078 03:18:34 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:31.078 03:18:34 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:18:31.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:31.078 --rc genhtml_branch_coverage=1
00:18:31.078 --rc genhtml_function_coverage=1
00:18:31.078 --rc genhtml_legend=1
00:18:31.078 --rc geninfo_all_blocks=1
00:18:31.078 --rc geninfo_unexecuted_blocks=1
00:18:31.078
00:18:31.078 '
00:18:31.078 03:18:34 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:18:31.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:31.078 --rc genhtml_branch_coverage=1
00:18:31.078 --rc genhtml_function_coverage=1
00:18:31.078 --rc genhtml_legend=1
00:18:31.078 --rc geninfo_all_blocks=1
00:18:31.078 --rc geninfo_unexecuted_blocks=1
00:18:31.078
00:18:31.078 '
00:18:31.078 03:18:34 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:18:31.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:31.078 --rc genhtml_branch_coverage=1
00:18:31.078 --rc genhtml_function_coverage=1
00:18:31.078 --rc genhtml_legend=1
00:18:31.078 --rc geninfo_all_blocks=1
00:18:31.078 --rc geninfo_unexecuted_blocks=1
00:18:31.078
00:18:31.078 '
00:18:31.078 03:18:34 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:18:31.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:31.078 --rc genhtml_branch_coverage=1
00:18:31.078 --rc genhtml_function_coverage=1
00:18:31.078 --rc genhtml_legend=1
00:18:31.078 --rc geninfo_all_blocks=1
00:18:31.078 --rc geninfo_unexecuted_blocks=1
00:18:31.078
00:18:31.078 '
00:18:31.078 03:18:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:18:31.078 03:18:34 -- scripts/common.sh@15 -- $ shopt -s extglob
00:18:31.078 03:18:34 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:18:31.078 03:18:34 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:31.078 03:18:34 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:31.078 03:18:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:31.078 03:18:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:31.078 03:18:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:31.078 03:18:34 -- paths/export.sh@5 -- $ export PATH
00:18:31.078 03:18:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:31.078 03:18:34 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:18:31.078 03:18:34 -- common/autobuild_common.sh@479 -- $ date +%s
00:18:31.078 03:18:34 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731899914.XXXXXX
00:18:31.078 03:18:34 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731899914.6X3mtd
00:18:31.078 03:18:34 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:18:31.078 03:18:34 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:18:31.078 03:18:34 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:18:31.078 03:18:34 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:18:31.078 03:18:34 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:18:31.078 03:18:34 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:18:31.078 03:18:34 -- common/autobuild_common.sh@495 -- $ get_config_params
00:18:31.078 03:18:34 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:18:31.078 03:18:34 -- common/autotest_common.sh@10 -- $ set +x
00:18:31.078 03:18:34 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:18:31.078 03:18:34 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:18:31.078 03:18:34 -- pm/common@17 -- $ local monitor
00:18:31.078 03:18:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:31.078 03:18:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:31.078 03:18:34 -- pm/common@25 -- $ sleep 1
00:18:31.078 03:18:34 -- pm/common@21 -- $ date +%s
00:18:31.078 03:18:34 -- pm/common@21 -- $ date +%s
00:18:31.078 03:18:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731899914
00:18:31.078 03:18:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731899914
00:18:31.078 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731899914_collect-cpu-load.pm.log
00:18:31.078 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731899914_collect-vmstat.pm.log
00:18:32.017 03:18:35 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:18:32.017 03:18:35 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:18:32.017 03:18:35 -- spdk/autopackage.sh@14 -- $ timing_finish
00:18:32.018 03:18:35 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:18:32.018 03:18:35 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:18:32.018 03:18:35 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:18:32.018 03:18:35 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:18:32.018 03:18:35 -- pm/common@29 -- $ signal_monitor_resources TERM
00:18:32.018 03:18:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:18:32.018 03:18:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:32.018 03:18:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:18:32.018 03:18:35 -- pm/common@44 -- $ pid=102575
00:18:32.018 03:18:35 -- pm/common@50 -- $ kill -TERM 102575
00:18:32.018 03:18:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:32.018 03:18:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:18:32.018 03:18:35 -- pm/common@44 -- $ pid=102577
00:18:32.018 03:18:35 -- pm/common@50 -- $ kill -TERM 102577
00:18:32.018 + [[ -n 6167 ]]
00:18:32.018 + sudo kill 6167
00:18:32.028 [Pipeline] }
00:18:32.044 [Pipeline] // timeout
00:18:32.049 [Pipeline] }
00:18:32.065 [Pipeline] // stage
00:18:32.072 [Pipeline] }
00:18:32.088 [Pipeline] // catchError
00:18:32.098 [Pipeline] stage
00:18:32.100 [Pipeline] { (Stop VM)
00:18:32.114 [Pipeline] sh
00:18:32.397 + vagrant halt
00:18:34.936 ==> default: Halting domain...
00:18:43.073 [Pipeline] sh
00:18:43.357 + vagrant destroy -f
00:18:45.895 ==> default: Removing domain...
00:18:45.907 [Pipeline] sh
00:18:46.189 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:18:46.199 [Pipeline] }
00:18:46.213 [Pipeline] // stage
00:18:46.218 [Pipeline] }
00:18:46.232 [Pipeline] // dir
00:18:46.236 [Pipeline] }
00:18:46.250 [Pipeline] // wrap
00:18:46.255 [Pipeline] }
00:18:46.268 [Pipeline] // catchError
00:18:46.277 [Pipeline] stage
00:18:46.279 [Pipeline] { (Epilogue)
00:18:46.291 [Pipeline] sh
00:18:46.575 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:18:51.869 [Pipeline] catchError
00:18:51.871 [Pipeline] {
00:18:51.888 [Pipeline] sh
00:18:52.173 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:18:52.173 Artifacts sizes are good
00:18:52.183 [Pipeline] }
00:18:52.199 [Pipeline] // catchError
00:18:52.211 [Pipeline] archiveArtifacts
00:18:52.218 Archiving artifacts
00:18:52.316 [Pipeline] cleanWs
00:18:52.328 [WS-CLEANUP] Deleting project workspace...
00:18:52.328 [WS-CLEANUP] Deferred wipeout is used...
00:18:52.335 [WS-CLEANUP] done
00:18:52.336 [Pipeline] }
00:18:52.352 [Pipeline] // stage
00:18:52.357 [Pipeline] }
00:18:52.372 [Pipeline] // node
00:18:52.377 [Pipeline] End of Pipeline
00:18:52.417 Finished: SUCCESS